Workplace facial screening is a bad idea

Artificial intelligence has been on the rise in workplaces for at least the past decade. From consumer algorithms to quantum computing, AI’s uses have grown in type and scope.

One of the more recent advances in AI is the ability to read emotions through facial and behavioral analysis. While this emotional AI has largely been deployed in marketing campaigns and health care, a growing number of high-profile companies are now using it in hiring decisions.

Companies should stop this immediately.

There are a number of risks associated with this technology. One of the most troubling is an apparent racial bias: the software assigns more negative emotions to Black people than to white people, even when they are smiling.

For example, Microsoft’s Face API software scored Black faces as three times more contemptuous than white faces. This bias is harmful in many ways, but it is especially devastating to non-white professionals, whose ability to secure a job and advance in their field is undermined from the start.

Any workplace that relies on a hiring algorithm that disproportionately reads negative emotions in Black and brown faces will only deepen workplace inequality and discriminatory treatment.

According to a Washington Post report, more than 100 companies are currently using emotional AI, and this technology has already been used to assess millions of job applicants. Among the top-tier companies deploying emotional AI are Hilton, Dunkin’ Donuts, IBM and the Boston Red Sox.

Emotion recognition has been estimated to be at least a $20 billion market.

The technology uses facial analysis to assess emotional and cognitive traits. Typically, an interviewee answers preselected questions during a recorded video interview and is then assessed by the algorithm. The assessment produces a grade or score on various characteristics, including verbal skills, facial movements and emotional cues, all of which are meant to predict how likely the candidate is to succeed in a position before the company takes any next steps.
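To make those mechanics concrete, here is a minimal, purely illustrative sketch of how such a pipeline might collapse interview features into a single score. The feature names, weights and the employability_score function are invented for illustration and do not describe any vendor's actual system.

from dataclasses import dataclass

@dataclass
class InterviewFeatures:
    """Hypothetical features an emotional-AI screener might extract from a recorded interview."""
    smile_ratio: float         # fraction of video frames classified as "smiling"
    eye_contact_ratio: float   # fraction of frames with gaze detected toward the camera
    speech_clarity: float      # 0-1 confidence score from a speech-to-text model
    positive_sentiment: float  # 0-1 sentiment score computed on the transcript

# Illustrative weights. Real systems learn weights from past "successful" hires,
# which is precisely where historical bias can enter the score.
WEIGHTS = {
    "smile_ratio": 0.3,
    "eye_contact_ratio": 0.2,
    "speech_clarity": 0.25,
    "positive_sentiment": 0.25,
}

def employability_score(f: InterviewFeatures) -> float:
    """Collapse the extracted features into a single 0-100 'employability' grade."""
    raw = (
        WEIGHTS["smile_ratio"] * f.smile_ratio
        + WEIGHTS["eye_contact_ratio"] * f.eye_contact_ratio
        + WEIGHTS["speech_clarity"] * f.speech_clarity
        + WEIGHTS["positive_sentiment"] * f.positive_sentiment
    )
    return round(100 * raw, 1)

candidate = InterviewFeatures(smile_ratio=0.6, eye_contact_ratio=0.7,
                              speech_clarity=0.8, positive_sentiment=0.65)
# A single number like this then gates whether the candidate advances.
print(employability_score(candidate))

The point of the sketch is that every judgment is reduced to weighted proxies, and the weights themselves encode whoever the system was trained to prefer.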

Supporters of the technology argue that it removes human prejudice from the equation. But replacing human bias with an artificial one can’t be the solution.

Moreover, companies tend to train emotional AI on very limited data sets and then use it to decide who gets marked as “employable.” These limited data sets usually reflect majority groups while ignoring minority ones. If an applicant’s first language isn’t English and they speak with an accent, or if an applicant has a disability, they are more likely to be flagged as less employable.
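A toy example, with made-up numbers, shows how scoring applicants against a norm derived from a narrow majority sample penalizes anyone who differs from it. Nothing here reflects a real product; the speech-pace figures and the typicality_penalty function are hypothetical.

import statistics

# Hypothetical speech-pace measurements (words per minute) from past hires,
# who in this example were overwhelmingly native English speakers.
training_sample_wpm = [150, 155, 148, 160, 152, 158, 149, 154]

mean_wpm = statistics.mean(training_sample_wpm)
std_wpm = statistics.stdev(training_sample_wpm)

def typicality_penalty(applicant_wpm: float) -> float:
    """Penalize applicants the further their speech pace falls from the training sample's norm."""
    return abs(applicant_wpm - mean_wpm) / std_wpm

# An applicant who matches the training data looks "normal"...
print(typicality_penalty(152))   # close to 0
# ...while a fluent non-native speaker who naturally speaks more slowly
# is scored as far less typical, despite equally good answers.
print(typicality_penalty(118))   # large penalty

The model is not measuring competence; it is measuring distance from the people it was trained on.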

There are a plethora of other anecdotes that highlight the biases of emotional AI, even outside the workplace. These include cameras that identify Asian faces as blinking and software that misgenders those with darker skin.

Of course, companies have been warned about these biases and have so far ignored the warnings; many still use software like HireVue, which Princeton computer science professor Arvind Narayanan has described as “a bias perpetuation engine.”

Until emotional AI is shown to be free of racial and gender biases, it’s unsafe for use in a world already struggling to overcome inequalities. If companies want to assist in that struggle, they should end the use of emotional AI in the workplace.