In recent years, there has been an increase in the usage of AI tools that are advertised as a solution to the lack of diversity in the workforce. These tools range from chatbots and CV scrapers to software that analyzes video interviews.
Those behind these systems claim that they eliminate human prejudice around gender and race during recruiting by employing algorithms that read vocabulary, speech patterns, and even facial micro-expressions to screen large pools of job applicants for the right personality type and “culture fit.”
In a new paper appearing in Philosophy & Technology, researchers from Cambridge’s Centre for Gender Studies argue that these claims make some uses of AI in hiring little more than “automated pseudoscience” akin to physiognomy or phrenology: the discredited belief that personality can be derived from facial features and skull shape.
According to them, this is a dangerous form of “technosolutionism”: relying on technology to provide quick fixes for discrimination problems that instead require sustained investment and changes to organizational culture.
To challenge these claims, the researchers have built an AI tool of their own, available at https://personal-ambiguator-frontend.vercel.app/, that mimics the technology in order to expose the flaws in these new recruiting practices.
The “Personality Machine” shows how random adjustments in facial expression, dress, lighting, and backdrop may provide radically different personality readings, which could be the difference between rejection and advancement for a generation of job applicants vying for graduate employment.
As the technology is calibrated to seek the employer’s “ideal candidate,” the Cambridge team suggests that the use of AI to reduce candidate pools may ultimately encourage homogeneity rather than diversity in the workforce.
According to the authors, this might allow anyone with the proper education and experience to “win over the algorithms” by imitating the behaviors the AI is taught to recognize and bringing similar attitudes into the workplace.
Furthermore, they contend that the applicants who are deemed the best fits will probably end up being those who most closely resemble the current workforce because algorithms are developed using historical data.
According to co-author Dr. Eleanor Drage, “We are concerned that some vendors are wrapping ‘snake oil’ products in a shiny package and selling them to unsuspecting customers.”
“By claiming that racism, sexism and other forms of discrimination can be stripped away from the hiring process using artificial intelligence, these companies reduce race and gender down to insignificant data points, rather than systems of power that shape how we move through the world.”
They say that these AI recruitment tools are often proprietary “black boxes,” which makes it hard to scrutinize how they actually work.
“While companies may not be acting in bad faith, there is little accountability for how these products are built or tested,” adds Drage. “As such, this technology, and the way it is marketed, could end up as dangerous sources of misinformation about how recruitment can be ‘de-biased’ and made fairer.”
Even though there has been some pushback (the EU’s proposed AI Act, for example, labels AI-powered hiring software as “high risk”), the researchers say that tools made by companies such as Retorio and HireVue are deployed with little regulation, and they point to surveys showing that the use of AI in hiring is on the rise.
According to a 2020 survey of 500 organizations across five countries and several industries, 24% of enterprises had already adopted AI for hiring, and 56% of hiring managers planned to do so within the next year.
A second survey, of 334 human resources leaders conducted in April 2020 as the pandemic took hold, found that 86% of organizations were incorporating new virtual technology into their hiring procedures.
According to co-author Dr. Kerry Mackereth, who co-hosts the Good Robot podcast on the ethics of technology with Drage, “this trend was in place as the pandemic began, and the accelerated shift to online working caused by COVID-19 is likely to see the greater deployment of AI tools by HR departments in the future.”
According to HR professionals the researchers spoke with, COVID-19 is not the sole factor. As Mackereth puts it, “volume recruitment is increasingly untenable for human resources teams that are desperate for software to cut costs as well as numbers of applicants needing personal attention.”
According to Drage and Mackereth, many businesses now use AI to analyze applicant videos, grading candidates on the “big five” personality traits: extroversion, agreeableness, openness, conscientiousness, and neuroticism. The researchers liken this method to the way lie-detection AI evaluates candidates.
The undergraduates who built the “Personality Machine,” which uses a similar method to expose AI’s flaws, say that while their tool might not help users beat the algorithm, it will give job seekers a taste of the kinds of AI scrutiny they might be under, perhaps even without their knowledge.
“All too often, the hiring process is opaque and confusing,” adds Euan Ong, one of the student developers. “We want to give people a visceral demonstration of the sorts of judgements that are now being made about them automatically.
“These tools are trained to predict personality based on common patterns in images of people they’ve previously seen, and often end up finding spurious correlations between personality and apparently unrelated properties of the image, like brightness. We made a toy version of the sorts of models we believe are used in practice, in order to experiment with it ourselves,” Ong adds.
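The spurious-correlation problem Ong describes is easy to reproduce. The sketch below is a toy illustration in Python, not the team’s actual model: it assumes each video frame has been reduced to two hypothetical features (mean pixel brightness and a genuine behavioural “expression” score), and that the historical training labels happen to correlate with brightness. A linear model fitted to such data will reward lighting, not personality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training set": each candidate image is reduced to two
# features -- mean pixel brightness and a (hypothetical) expression score.
n = 1000
brightness = rng.uniform(0.0, 1.0, n)
expression = rng.normal(0.0, 1.0, n)

# Suppose historical "extroversion" labels happen to track brightness
# (e.g. extroverted hires were filmed in brighter rooms) -- an artifact
# of the data, not a causal signal.
extroversion = 0.8 * brightness + 0.2 * expression + rng.normal(0.0, 0.05, n)

# Fit ordinary least squares on both features plus an intercept.
X = np.column_stack([brightness, expression, np.ones(n)])
w, *_ = np.linalg.lstsq(X, extroversion, rcond=None)

# The same candidate, screened twice with only the lighting changed:
dim_score    = np.array([0.2, 0.5, 1.0]) @ w
bright_score = np.array([0.9, 0.5, 1.0]) @ w
print(f"dim lighting score:    {dim_score:.2f}")
print(f"bright lighting score: {bright_score:.2f}")
# The brightness coefficient dominates, so the score jumps even though
# the candidate's behaviour (expression = 0.5) is identical.
```

Because the learned brightness weight is large, simply turning up a lamp raises this toy model’s “extroversion” score, which is exactly the kind of fragility the Personality Machine demo exposes.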