Tom Viggers

pymetrics

Director, EMEA Global Accounts

The selection process: algorithmic bias or bias against algorithms?

The use of algorithms to select candidates, or indeed any kind of pre-defined criteria, has its limitations in the world of recruitment. So, can there ever be a definitive way to review applicants efficiently and fairly? 

“Amazon Gender-Diversity Fail Shows Limits of Tech” read the headline from Reuters on October 10. Sounds familiar, doesn’t it? The dangers posed by AI are a hot topic right now, and it is natural to conclude that this is yet another example.

The moral of the Amazon story, so the narrative goes, is that using AI for recruitment is a dangerous thing to do. Robo-recruiters lack the empathy of human beings, so if we start using them to make our decisions for us, this kind of thing is bound to keep happening.

An algorithm is a set of rules that are followed to achieve certain objectives. The term is used all the time due to the rise of machine learning, which is the fast-advancing ability of computers to write their own algorithms, but algorithms written by human beings are nothing new in hiring.

Inventing an algorithm

You could argue that to have any kind of useful hiring practice, you need to use an algorithm: you have to work out what you’re looking for, then see which applicant best matches it.
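
To make that concrete, here is a minimal sketch of a human-written screening algorithm; the rules and field names are invented for illustration, not taken from any real process.

def shortlist(applicant: dict) -> bool:
    # A fixed, human-written set of rules applied to every applicant.
    return (
        applicant["maths_grade"] == "A"          # academic filter
        and applicant["years_experience"] >= 2   # experience filter
        and not applicant["cv_has_typos"]        # presentation filter
    )

# One applicant passes all three rules and is shortlisted.
print(shortlist({"maths_grade": "A", "years_experience": 3, "cv_has_typos": False}))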

If you are a volume recruiter receiving 250 applications per vacancy, you will also know that algorithms are economically necessary. If you spent five minutes reading every CV and gave everyone a single half-hour interview, you’d spend over three and a half working weeks filling a single role – so recruiters don’t do that.
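
As a rough sketch of that arithmetic (the 40-hour working week is my assumption, not the article’s):

# Back-of-envelope screening cost for one vacancy, using the figures above.
applicants = 250
minutes_each = 5 + 30  # five minutes per CV plus one half-hour interview

total_hours = applicants * minutes_each / 60
working_weeks = total_hours / 40  # assuming a 40-hour working week

print(f"{total_hours:.0f} hours, or {working_weeks:.1f} working weeks")
# -> 146 hours, or 3.6 working weeks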

Instead, they spend a vastly reduced amount of time reviewing CVs as a first pass – six seconds, according to one piece of research – before shortlisting the ones to spend more time on.

The problem is that their algorithm (the set of rules they apply when reviewing a CV in six seconds) isn’t very effective. Human beings, including ones with recruiter licences, have a habit of making assumptions about people based on irrelevant characteristics (gender and ethnicity in particular).

In six seconds, you can probably spot a spelling mistake or make a sweeping judgement about someone based on their name or university, but you can’t do a lot else.

Methodologies, best practices and training

So, we need something different. We need methodologies, best practices and training. If we can use an algorithm that tests in a more objective way, we can make hiring processes fairer. Sounds great, doesn’t it? All we need to do is figure out a way of ranking people by quality, then we can hire based on that.

We can make sure our whole system is a meritocracy where everyone in a good job deserves it and everyone who’s not can rest assured that they would be if only they were a “high quality candidate.”

This is the argument for using academic attainment as a recruiting filter. High-quality candidates are good at maths; ergo, screen out the people who are not good at maths. How do you know if they’re not good at maths? Simple: they didn’t get an A in their maths exam.

Trouble is, there are many reasons why some people don’t get an A in their maths exams and others do. One, for example, could be that over half a million children in the UK arrive at school each day too hungry or malnourished to learn.

And even putting that aside, what does an A in maths actually tell you about someone, other than that they got an A in maths? It doesn’t, for example, tell you whether they can explain mathematical concepts clearly in a business context.

What about the current push for “digital natives”? How do we define that? Should we only hire people who say they know how to change a printer cartridge? Or maybe track which browser they use when they apply and use that as a cut-off?

Making bad decisions faster

Much easier just to use your gut and rely on one of the 175 cognitive biases that help us to make bad decisions faster.

If CVs and academic results aren’t predictive of success, and we accept that everything else on people’s CVs is so inconsistent as to be practically incomparable (especially by a human), then maybe we can introduce tests that are more useful? This, of course, is the rationale for standardised testing.

The trouble is, you still have to pin down what you’re looking for, and therein lies our old friend bias once again. Standardised tests such as IQ assessments have a long history of causing adverse impact against women and ethnic minorities, just as Amazon’s algorithm did.

The problem isn’t with the people taking them; it’s with the tests themselves. If you limit your definition of high performance to a narrow set of easily-observable characteristics from historically high-performing groups, then you reduce the group of people from whom you can spot future potential. And you also make no distinction (or at best very blunt distinctions) between the types of cleverness that might make someone a great engineer, doctor or lawyer.

The adverse impact/validity trade-off

Historically, these kinds of tests have been made “fair” simply by lowering the cut-off score until it sits at a level where no one group is favoured by the methodology.

But if this comes at the expense of validity and usefulness (i.e. it only screens out a small group of people) then it doesn’t solve the problem; it just sweeps it under the carpet, leaving our human judgement to pick up the pieces.
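
To see the trade-off concretely, here is a minimal sketch using the US “four-fifths” rule, a common benchmark for adverse impact (the article doesn’t name it, and the scores and groups below are invented for illustration):

# Adverse impact vs. usefulness as the cut-off falls. Equal-sized groups
# with hypothetical test scores; real data would look messier.
def selection_rate(scores, cutoff):
    return sum(s >= cutoff for s in scores) / len(scores)

group_a = [55, 60, 62, 70, 75, 80, 85, 90]
group_b = [45, 50, 58, 61, 66, 72, 78, 83]

for cutoff in (80, 70, 60, 50):
    rate_a = selection_rate(group_a, cutoff)
    rate_b = selection_rate(group_b, cutoff)
    impact_ratio = rate_b / rate_a            # four-fifths rule: flag if < 0.8
    screened_out = 1 - (rate_a + rate_b) / 2  # valid because groups are equal-sized
    print(f"cutoff {cutoff}: impact ratio {impact_ratio:.2f}, "
          f"screened out {screened_out:.0%}")

# cutoff 80: impact ratio 0.33, screened out 75%
# cutoff 50: impact ratio 0.88, screened out 6%
# The lower cut-off passes the four-fifths test, but the test now screens
# out almost no one, so human judgement decides the rest.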

The fact is that if this problem were simple to solve, we wouldn’t live in a world where biased hiring outcomes are still the norm.

Let’s be clear: the Amazon story is not one of technology creating a problem; it is one of technology failing at the considerable challenge of improving on a fundamentally broken status quo. That doesn’t mean the technology can’t do it, and it doesn’t mean Amazon should be criticised for trying – it just means that this is a difficult problem to solve.
