The Paradox We Built
A recent piece in The Ambidextrous Executive articulates something many of us have sensed: recruitment has become a closed loop where computers select computers. Read the full article: The AI Recruitm
The numbers tell a familiar story. AI speeds screening by 40-60%, yet 79% of job seekers feel more anxious than ever. Automated tools push candidates to submit 50-100 applications daily, but 62% of UK hiring managers believe qualified people are now rejected more often. More than half of candidates report being ghosted after interviews.
The author identifies the “keyword-matching fallacy”: strong candidates are screened out for lacking specific phrasing, while weaker ones advance because they have learned to game the algorithm. In one experiment, a candidate added prestigious company names and blatantly inappropriate statements to her CV. A human reviewer would have rejected it immediately; the automated system advanced her to interview.
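To make the fallacy concrete, here is a minimal, hypothetical sketch of the kind of naive keyword screener the article describes. The keyword list, CV texts, and function name are all invented for illustration; real applicant-tracking systems are more elaborate, but the failure mode is the same: counting keyword hits with no understanding of context.

```python
# Hypothetical keyword screener: scores a CV by counting matches against
# a fixed keyword list, with no notion of context or truthfulness.
KEYWORDS = {"python", "stakeholder", "agile", "leadership", "google"}

def keyword_score(cv_text: str) -> int:
    """Count how many target keywords appear anywhere in the CV text."""
    words = set(cv_text.lower().split())
    return len(KEYWORDS & words)

# A strong candidate who phrases real experience differently scores zero...
genuine = "Led cross-functional engineering teams; built data pipelines."

# ...while a CV stuffed with brand names and buzzwords scores highly,
# even though a human would reject it on sight.
gamed = "Google leadership agile Python stakeholder synergy nonsense."

print(keyword_score(genuine))  # 0
print(keyword_score(gamed))    # 5
```

The gamed CV outranks the genuine one purely because it contains the right tokens, which is exactly how the experiment's doctored CV reached the interview stage.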
Both sides respond with more automation. Both feel the process becoming less human. Trust erodes.
The article concludes: “Technology should serve humanity. Recruitment, perhaps more than any other organisational process, requires this principle to be upheld.”
I have been thinking about this for years. The issue is not AI itself — it is the underlying logic. Screening operates on elimination: what does not pass gets discarded. Efficiency measured in volume processed, not in understanding gained.
What if we measured something different? What if candidates could see how they have been evaluated — and respond? What if the process generated insight rather than just filtering noise?
The article calls for “AI as tool, not master.” I would frame it differently: the tool reflects the intent. If the intent is transactional, the tool will be too.
A change is needed.


