How can we remove unconscious biases from hiring practices and ensure a more diverse workforce? According to Oklahoma State University researcher Kimberly Houser, the answer is to address the source of the problem – human decision-making. Her soon-to-be-published research shows that using machine decision-making through artificial intelligence (AI) can remove unconscious bias and “noise” from the hiring and promotion process and begin making the workplace reflect a diverse society.
Houser, assistant professor of legal studies in the Department of Management in the Spears School of Business, studied gender inequality in the technology industry. Silicon Valley is well known for its lack of diversity and unwelcoming culture for women tech workers. Although these companies began releasing diversity reports in 2014 touting their diversity measures, little progress has been made. Women in technical roles hover at around 20 percent, despite women making up more than half the overall American workforce. The industry, including giants Apple (where 23 percent of the tech workforce is female, according to statista.com), Google (21 percent) and Twitter (17 percent), has been widely criticized for its lack of diversity and a culture rife with sexual harassment and discrimination.
“The system for increasing diversity in the tech industry is broken,” Houser wrote in her paper set to be published this summer in the Stanford Technology Law Review.
Houser reports that women technology workers are unable to gain a stronger footing because of unconscious bias and noise – inevitable human inconsistencies in decision-making – baked into hiring. Her research shows that AI could make the hiring process blind to gender and race, resulting in the best people being hired for jobs and in greater diversity.
“We have an industry dominated by white males from universities like Stanford, MIT, Harvard, Yale and Cornell,” Houser said. “When you have a male from Stanford interviewing a group of people, they tend to like males who graduated from Stanford. It’s called affinity bias and it’s unconscious. You’re not aware of it as a bias and you’re not sure why, but you think the male Stanford graduate is best for the job. It is not a conscious effort to ignore everyone else.”
Silicon Valley’s answer has been widespread employee diversity training to make its workers aware of biases, but training has had little impact. As an example, Google has provided training to more than 70,000 employees since 2014 but the percentage of male versus female tech workers there has not changed.
Houser writes that there are many reasons the tech industry has trouble attracting and retaining women tech workers, but the most critical is unconscious bias in decision-making. Although this comes into play in many areas, one of the methods she suggests in her paper with respect to the hiring process is to make these decisions blind to gender and race by using AI.
“Research has shown that if you take race and gender off of resumes, more women and minorities get interviews and are hired,” she said.
In a study she cites, researchers found that simply replacing a woman’s name on a resume with a man’s name improved the odds of being hired by 61 percent.
“It’s not that there are a bunch of men saying, ‘Let’s keep women out!’ It’s more, ‘Who do I feel more comfortable working with?’ The problem is we like people who are like ourselves,” Houser said, citing extensively from the research of Nobel Prize winner Daniel Kahneman.
Houser points to a number of companies using AI in hiring to perform functions previously done by a human or used to remove human variables from the process. AI can be used to make resumes and the interview process anonymous, remove biased language from job descriptions and create and utilize neuroscience games to match candidates’ skills and behaviors with the best jobs. For example, one company recruits, interviews and evaluates job candidates using a chatbot, a computer program that carries on an online “conversation” with a person without the attendant biases that would be present in a human interviewer.
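To make the idea of anonymizing resumes concrete, here is a minimal sketch of the kind of redaction such a tool might perform. The name handling, pronoun list and regex patterns are illustrative assumptions, not a description of any system named in Houser's paper; a production tool would use a trained named-entity model rather than hard-coded patterns.

```python
import re

# Illustrative substitution table: gendered pronouns and titles.
# A real system would use a far more complete, model-driven approach.
GENDERED_TERMS = {
    r"\bhe\b": "they", r"\bshe\b": "they",
    r"\bhis\b": "their", r"\bher\b": "their",
}

def anonymize_resume(text: str, candidate_name: str) -> str:
    """Redact the candidate's name and neutralize gendered pronouns."""
    out = re.sub(re.escape(candidate_name), "[CANDIDATE]",
                 text, flags=re.IGNORECASE)
    for pattern, replacement in GENDERED_TERMS.items():
        out = re.sub(pattern, replacement, out, flags=re.IGNORECASE)
    return out

resume = "Jane Smith led her team of five engineers."
print(anonymize_resume(resume, "Jane Smith"))
# -> [CANDIDATE] led their team of five engineers.
```

A reviewer reading the redacted text has no name or pronoun cues to trigger the affinity bias Houser describes, which is the point of blinding the first screening pass.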
But using AI cannot totally eliminate bias without other fixes, Houser found, because the predominantly white, male programmers who write these programs introduce biases into their algorithms, resulting in machines “learning” that biases are the accepted norm. According to Houser, people are still needed to ensure that the data used in programs is balanced, and human auditors are needed on the back end to make sure results are unbiased. Houser said the key is to make sure both the data sets used and the humans involved in creating the AI are a diverse population to begin with.
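The back-end auditing Houser calls for can be sketched with a standard fairness check: comparing selection rates across groups. This example uses the EEOC's "four-fifths" rule of thumb as the flagging criterion – an assumption for illustration, since Houser's paper is not quoted here as prescribing a specific audit method – and the hiring counts are hypothetical.

```python
from collections import Counter

def selection_rates(outcomes):
    """Per-group selection rate: hired / total applicants in that group."""
    totals, hired = Counter(), Counter()
    for group, was_hired in outcomes:
        totals[group] += 1
        if was_hired:
            hired[group] += 1
    return {g: hired[g] / totals[g] for g in totals}

def adverse_impact(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    best-treated group's rate (the 'four-fifths' rule of thumb)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()
            if rate / top < threshold}

# Hypothetical audit data: (group, hired?) for 200 applicants.
data = ([("men", True)] * 40 + [("men", False)] * 60
        + [("women", True)] * 20 + [("women", False)] * 80)

print(adverse_impact(data))
# Women's 20% selection rate is half the men's 40% rate, so the
# audit flags "women" with an impact ratio of 0.5.
```

An audit like this does not explain *why* a model skews its selections, but it catches the skew, which is what lets humans intervene before biased training data becomes, in Houser's phrase, the accepted norm.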
Houser said she is often asked what happens if, despite every effort to remove human bias from the decision, new hires remain predominantly white males. Her reply, she said, is that research indicates that is not likely to happen. Many sources confirm that if decisions were actually made on the merits, many more diverse candidates would be hired and promoted.
“If we can make the candidate selection process, the interviewing and, once they’re in the door, the job promotion criteria objective, then we’ll really see the best people in these positions,” Houser said. “And that will start the change in tech companies – just getting women and minorities in the door. That will begin to change the culture.”