Could AI help to remove unconscious bias from the hiring process?

We might be more aware of it than ever before, but unconscious bias remains a widespread problem for businesses during the hiring process. Even with the best intentions in the world, our unconscious biases around age, gender or race have the power to significantly sway our hiring decisions, all without us even realising.

Understandably, this can be detrimental to efforts to improve inclusivity and diversity within the modern workplace. While many of us would like to think that we don’t hold biases towards prospective candidates, unfortunately, we’re all susceptible in one way or another. Some experts believe that unconscious biases are so ingrained in the human psyche that the only way we can truly stop them is to look for non-human solutions. This is where AI comes in.

Types of unconscious bias

Before we take a look at how AI can help to reduce unconscious bias in the hiring process, it’s important to understand that bias takes many forms. In fact, according to Wikipedia, there are as many as 180 documented social, memory and decision-making biases that can affect each of us. Here are some common examples of biases that have been found to significantly impact the hiring process.

Confirmation bias

Confirmation bias occurs when someone favours information that confirms their existing beliefs and ignores evidence to the contrary. For instance, if an interviewer has already formed a judgement from the information on a candidate’s CV, confirmation bias will lead them to focus their questioning on corroborating that judgement. Understandably, this can result in favouritism towards candidates who share similar beliefs and a negative bias towards those who don’t.

Personal similarity bias

This is a particularly common unconscious bias where people seek out and favour candidates who are just like them. Recent research has found that we’re all guilty of preferring candidates who are similar to ourselves in terms of hobbies, life experience and background, despite these factors not necessarily being linked to the job we’re hiring for. This can leave organisations with a lack of diverse mindsets, backgrounds and beliefs within their teams; a problem that has been shown to impact business success.

The halo effect

Largely based on first impressions, the halo effect is a cognitive bias where your initial impression of a prospective candidate colours how you think and feel about their character. For instance, under the halo effect an interviewer may take a positive first reaction to a candidate as evidence that they will be good at the job, rather than objectively assessing their skills and experience. Naturally, this can mean that highly personable people are given roles or responsibilities that they aren’t experienced or qualified for.

Name discrimination

Name discrimination refers to a form of prejudice where a candidate is discriminated against on account of their name. It’s often rooted in cultural stereotypes, which can influence a person’s unconscious decision-making and produce biases around race, gender, culture, religion, class and age. This bias most commonly occurs during the CV and application screening process.

A 2017 study by the University of Toronto found that people with Chinese, Indian or Pakistani-sounding names were 28% less likely to make the interview stage than candidates with English-sounding names. Another study found that CVs with male names were twice as likely to result in a hire as identical CVs containing female names.

How can AI help?

These common examples show how unconscious biases can affect our thinking and decision-making during the hiring process, undermining our best intentions. Because they come in many forms, are automatic and occur without our knowledge, these biases are hard to overcome. So, what can AI do to help?

Screening CVs 

Screening CVs is exactly the kind of pattern matching that AI excels at. Recruitment software that uses AI can screen thousands of CVs in a fraction of the time a human needs, and it can also reduce unconscious bias by using machine learning to understand which skills and qualifications the role requires. It then analyses prospective candidates’ CV data and identifies those who best fit the job profile.

AI recruitment software can also be programmed to ignore demographic information such as gender, race and age, as well as details like school names and addresses, which can act as proxies for race, religious belief and socioeconomic status.
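To make this concrete, here’s a minimal sketch of how such “blind” screening might work, assuming CVs have already been parsed into structured records. The field names, skill list and example data are all hypothetical, and real systems are far more sophisticated.

```python
# A minimal sketch of "blind" CV screening: redact demographic fields,
# then rank candidates by overlap with the skills the role requires.
# All field names and example data here are hypothetical.

REDACTED_FIELDS = {"name", "age", "gender", "address", "school"}

def redact(cv: dict) -> dict:
    """Drop fields that can reveal, or act as proxies for, demographics."""
    return {k: v for k, v in cv.items() if k not in REDACTED_FIELDS}

def skill_score(cv: dict, required_skills: set) -> float:
    """Fraction of the required skills that appear on the candidate's CV."""
    candidate_skills = {s.lower() for s in cv.get("skills", [])}
    wanted = {s.lower() for s in required_skills}
    return len(candidate_skills & wanted) / len(wanted)

required = {"Python", "SQL", "data analysis"}
cvs = [
    {"name": "A. Example", "age": 42, "skills": ["Python", "SQL"]},
    {"name": "B. Example", "age": 29, "skills": ["Excel", "data analysis"]},
]

# Redact first, so demographic fields never reach the scoring step.
blind_cvs = [redact(cv) for cv in cvs]
for cv in sorted(blind_cvs, key=lambda c: skill_score(c, required), reverse=True):
    print(round(skill_score(cv, required), 2), cv)
```

The key design point is the order of operations: demographic fields are stripped before any scoring happens, so they cannot influence the ranking.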

Screening job ads

Have you ever stopped to think about the wording used in your job ads? Research by the University of Waterloo and Duke University defined a series of words which socially, culturally and historically carry a stereotypical weight towards a particular gender. Male-gendered words include ambitious, decisive and dominant, whereas female-gendered words include sensitive, considerate and interpersonal.

Another study found that packing too many of these male-gendered words into a job advertisement or description can actually deter female candidates from applying. Rather than continuing to turn away large segments of your talent pool, AI software can use language analysis to identify exclusionary wording. Not only that, but it can also suggest alternative words so that your job ads and descriptions appeal to a wider and more diverse range of candidates.
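As a simple illustration, a screening tool can work from a word list like the one above and flag gendered terms along with suggested alternatives. This is a minimal sketch: the word list echoes the research examples above, but the substitutions are illustrative rather than drawn from any specific tool.

```python
# A simple sketch of flagging gendered wording in a job ad and suggesting
# alternatives. The suggested replacements are illustrative only.

GENDERED_TERMS = {
    "ambitious": "motivated",
    "decisive": "able to make informed decisions",
    "dominant": "confident",
}

def review_ad(text: str) -> list:
    """Return (flagged word, suggested alternative) pairs found in the ad."""
    words = {w.strip(".,;:!?").lower() for w in text.split()}
    return [(term, alt) for term, alt in GENDERED_TERMS.items() if term in words]

ad = "We want an ambitious, decisive engineer to join a dominant market leader."
for term, alt in review_ad(ad):
    print(f"Consider replacing '{term}' with '{alt}'")
```

Commercial tools go much further, using models trained on application outcomes rather than fixed word lists, but the principle is the same: detect loaded language and offer neutral alternatives.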

Exposing bias within the overall hiring process

Some people aren’t entirely convinced that AI can eradicate unconscious bias, because it is trained to find patterns in past behaviour. This means that any human bias already present in a recruitment function can be learned by the AI, even if it’s unconscious.

In 2018, it was reported that e-commerce giant Amazon had discovered that its machine-learning software, which was automating the CV screening process, was favouring men over women. This happened because the AI had been trained to vet candidates by observing patterns in CVs submitted to the company over the previous decade, a high percentage of which came from men. As a result, the AI taught itself that male candidates were preferable.

To avoid incidents such as this, we humans still have a key role to play. AI programs and software learn from us, and human oversight is vital to ensure they aren’t replicating existing biases or creating new ones.

We need to provide our AI recruitment software with broad training data drawn from a diverse population, spanning factors such as country, gender, age, race and religion, in order to produce an unbiased output. You can also test your recruitment process for bias by assessing the demographic breakdown of new hires and comparing it to your overall candidate pool, or by using tools such as Google’s What-If Tool.
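One widely used check of this kind compares selection rates across demographic groups against the “four-fifths” rule of thumb, under which a group whose selection rate falls below 80% of the highest group’s rate warrants review. Here’s a minimal sketch, with made-up numbers:

```python
# A small sketch of one bias check: compare selection rates across groups
# using the "four-fifths" rule of thumb. Group labels and counts are
# invented for illustration.

applicants = {"group_a": 200, "group_b": 150}
hired = {"group_a": 40, "group_b": 15}

rates = {g: hired[g] / applicants[g] for g in applicants}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best  # impact ratio relative to the best-performing group
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")
```

A failed check doesn’t prove discrimination on its own, but it tells you exactly where in the pipeline to start looking.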

With the right training and testing from the outset, AI can pinpoint unconscious bias within your overall recruitment process more quickly and accurately than any human can. This gives you an opportunity to address any existing bias issues before they have a negative impact on your hiring process and decisions.

It’s important to remember that while AI can provide us with reams and reams of data, it doesn’t make the final hiring decision. That’s a task that is still very much down to us.

While AI could help us get closer to eradicating unconscious bias completely, it’s important to realise that this needs to be a joint effort. AI needs a human touch to constantly steer it in the right direction, and humans need AI to help expose their blind spots. It will be a process of continual improvement and oversight, ensuring that the most unbiased, data-driven output informs our hiring decisions.
