Establishing the boundaries of AI

Can machines make better decisions than humans? That’s the claim made by some experts, who predict that AI will eventually find a place in every business, delivering a “cutting edge” over the competition.

Despite the inherent contradiction in that claim (if every business adopts AI, it can hardly provide an edge over the competition), it’s clear to see the power and potential of AI and machine learning for every industry – including our space. But evidence shows that before building AI into their business, talent development teams must establish boundaries to ensure trust among candidates and clients.

Automated pseudoscience

AI and machine learning can perform time-consuming menial tasks faster and more accurately than humans. That could be sifting through applications to identify errors, searching for key terms or phrases, or even analysing sentiment at a basic level.

The application of AI is more problematic when programmes are trusted to make personal judgements about skills and suitability. Several software vendors claim their tools remove bias from talent matching, assessing candidates purely on their fit for a role rather than on any personal characteristic.

But do they work?

Cambridge University academics, concerned about the claims made by AI recruitment software companies, analysed several programmes – and the results weren’t exactly encouraging.

In a 2022 paper, they found that AI talent acquisition tools reduce race and gender to trivial data points and often rely on personality analysis that is “automated pseudoscience”.

They also found that AI systems designed to reduce bias could make things worse: “AI-powered hiring tools may unintentionally entrench cultures of inequality and discrimination by failing to address the systemic problems within organisations.”

As the evidence shows, we can’t blindly trust machine learning and AI systems, despite what their programmers may say. The algorithms powering them are fallible, and what works in the lab may not work in the real world (or, if the Cambridge researchers are right, anywhere at all).

More fundamentally, people still need to trust AI. For example, content moderation is a typical AI task that involves sifting through mountains of data to identify and remove offensive material. Researchers have found that social media platforms are increasingly leaving moderation to machines, but users don’t trust it.

Users are sceptical that an algorithm can fairly classify and control human expression – and given the findings of the Cambridge academics, their concerns are legitimate.

Why? Because we don’t trust machines to make complex decisions. Far from being flawless, AI reflects the limits of its evidence base and of the data and experience it has been exposed to – and that can have serious ethical implications.

Augmented approach

AI offers incredible potential if it’s applied with a clear purpose. For example, an AI system could identify red flags, such as spelling errors in applications, with greater accuracy than humans, but could it spot potential in a candidate? Can it truly remove bias from the process?

Fundamentally, AI systems use the past to predict the future. That’s fine for logic-based challenges with fixed rules and parameters, but they struggle when the future doesn’t resemble the past.

Chinese researchers suggest that people who work in our space should seek to develop an “augmented approach” to AI. Such a relationship recognises and respects the limitations of AI, as well as those of people.

The skills of a CIO five years ago aren’t the skills needed today, and we can confidently predict that they will be different again in five years. If we accept this, we can’t reasonably expect AI to forecast the in-demand skills of the future. AI systems also need human help to spot left-field ways in which candidates could transfer skills from one sector or speciality to another.

And could Saragossa’s 16-point appraisal process ever be completed by a machine? Currently, the answer is no. Perhaps it never will be. Until then, we’ll trust machines to perform data analysis, number crunching, and reporting, but we’ll trust our people to analyse and assess candidates. It’s why a personal recommendation from us means so much.