There is no substitute for emotional intelligence and empathy.


A Critique of Facial-Analysis Algorithms: The Case of Buolamwini and Gebru

Amid what can feel like overwhelming public enthusiasm for new AI technologies, Buolamwini and Gebru instigated a body of critical work that has exposed the bias, discrimination and oppressive nature of facial-analysis algorithms. Their audit was ground-breaking four years ago, and remains an influential reference point to counter the rapid progress of this technology and the threat it poses.
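To make concrete what such an audit measures, here is a minimal Python sketch of the disaggregated evaluation at the heart of this kind of work: rather than reporting a single overall accuracy figure, error rates are computed separately for each demographic subgroup. The records, subgroup names and labels below are hypothetical placeholders for illustration, not the data or systems Buolamwini and Gebru evaluated.

```python
# A minimal sketch of a disaggregated audit: error rates broken out by
# demographic subgroup instead of one overall accuracy number.
# All data below are hypothetical placeholders.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit records: (subgroup, ground truth, model output).
audit_records = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned women", "female", "female"),
    ("darker-skinned women", "female", "male"),   # misclassification
    ("darker-skinned women", "female", "female"),
    ("darker-skinned men", "male", "male"),
]

for group, rate in error_rates_by_group(audit_records).items():
    print(f"{group}: {rate:.0%} error rate")
```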

Impactful research isn’t always understood and acknowledged at first glance, especially when it challenges conventional thinking. The paper by Buolamwini and Gebru was considered an outlier in the field of computer vision and in ethics related to artificial intelligence. Much has changed since then: academic journals and conferences now highlight audit studies.

Algorithms that claim to detect emotions, predict gender or gauge someone’s trustworthiness have been dubbed ‘AI snake oil’ by some (go.nature.com/3rh7cfp), because such sociocultural attributes cannot reliably be inferred from faces, expressions or gestures (ref. 6). Others have called for a blanket ban on facial-recognition algorithms, saying that the technology resurrects the pseudosciences of physiognomy and phrenology (ref. 7).

Orly Lobel, an employment and labor law professor at the University of San Diego, has studied how technology and the gig economy affect workers. That work has made her familiar with the potential disruptions caused by tools such as automated résumé screening and apps that use algorithms to assign work to people. Yet Lobel feels the discussion about automation and AI is too stuck on the harms these systems create.

Orly Lobel: For the past decade, I’ve seen too much of a binary discussion. On one side, people inside the tech industry are not really interested in equality, distributive justice, and fairness; they’re just celebrating technology for the sake of technology. On the other side, people are asking who wins and who loses. I wanted to bridge the two conversations.

We should celebrate opportunities and successes, not just have tunnel vision about the problems. The people who want to have these conversations are getting discouraged. A lot of people, particularly women and minorities, are opting out of working for Big Tech. It becomes a vicious circle: we get fewer diverse voices on the inside, and the people who are critical or agnostic have less skin in the game.

People may assume that software gives precise or perfect answers. Is there a danger that automated hiring decisions will go unquestioned?

I’ve been researching hiring and diversity and inclusion for a long time. We know that a great deal of discrimination and disparity happens without algorithmic decision-making. The question to ask when you’re introducing a hiring algorithm is whether it outperforms the human process, not whether it is perfect. Adding more training data can help correct biases, but what about the underlying sources of those biases? How much can we debias humans, versus how much can we improve the systems themselves?

Most large companies use automated résumé screening. It’s important for agencies like the US Equal Employment Opportunity Commission and the Labor Department to look at the claims versus the results. There has not been enough discussion about the sources of these risks and whether they can be fixed.
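As a concrete illustration of checking claims against results, the following Python sketch applies the EEOC’s “four-fifths” rule of thumb for adverse impact to an automated résumé screen, comparing selection rates across groups. The group names and numbers are invented for illustration; a real audit would use actual applicant-flow data and a fuller statistical analysis.

```python
# A minimal sketch of one check a regulator or employer could run on an
# automated screening tool: compare selection rates across groups and apply
# the EEOC's "four-fifths" rule of thumb. All figures are hypothetical.

def selection_rate(selected, applicants):
    return selected / applicants

def adverse_impact_ratio(rate_group, rate_reference):
    """Ratio of a group's selection rate to the highest group's rate."""
    return rate_group / rate_reference

# Hypothetical screening outcomes: {group: (applicants, advanced to interview)}
outcomes = {
    "group A": (200, 60),
    "group B": (180, 36),
}

rates = {g: selection_rate(sel, apps) for g, (apps, sel) in outcomes.items()}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = adverse_impact_ratio(rate, reference)
    flag = "potential adverse impact" if ratio < 0.8 else "within four-fifths guideline"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")
```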

Will chatbots become more emotional? The flawed science of emotional AI, and how it affects gender and racial inequality in our society

The problem is that the majority of emotional AI is based on flawed science. Emotional AI algorithms, even when trained on large and diverse data sets, reduce facial and tonal expressions to a single emotion label without considering the social and cultural context of the person and the situation. Algorithms can, for instance, recognize and report that a person is crying, but they cannot always accurately deduce the reason and meaning behind the tears. Likewise, a scowling face is not always an indication of an angry person. Why? We all adapt our emotional displays according to social and cultural norms, so our expressions are not always a true reflection of our inner states. People often do “emotion work” to disguise their real emotions, and how they express emotion is likely to be a learned response rather than a spontaneous one. Women, for example, often modify their emotional displays more than men do, especially for emotions such as anger that have negative values ascribed to them, because they are expected to.
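The reduction described above can be made concrete with a toy sketch: a typical emotion-recognition pipeline scores a fixed set of labels and returns the highest-scoring one, with no representation of the social or cultural context in which the expression occurs. The labels, scores and function names below are invented for illustration and do not correspond to any real system.

```python
# A toy illustration of context-free emotion labeling: the pipeline maps
# expression features to one discrete label, and situational meaning never
# enters the decision. All names and scores are hypothetical.

EMOTION_LABELS = ["anger", "joy", "sadness", "fear", "neutral"]

def classify_expression(feature_scores):
    """Pick the single highest-scoring label; context is not modeled."""
    return max(feature_scores, key=feature_scores.get)

# Hypothetical per-label scores for a scowling face.
scores = {"anger": 0.62, "joy": 0.03, "sadness": 0.10, "fear": 0.05, "neutral": 0.20}

label = classify_expression(scores)
print(f"Predicted emotion: {label}")
# The pipeline outputs "anger" whether the person is furious, concentrating
# hard, or squinting into the sun; the meaning behind the expression is lost.
```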

In 2023, tech companies will release advanced chatbots that closely mimic human emotions to create more empathetic connections with users across banking, education, and health care. Microsoft’s chatbot Xiaoice is already successful in China, with average users reported to converse with “her” more than 60 times a month. In Turing-test-style evaluations, users reportedly failed to recognize it as a bot for ten minutes. One research consultancy’s report predicts almost 1.7 billion health-care chatbot interactions in 2023, freeing up medical staff time and potentially saving around $3.7 billion for health-care systems around the world.

As such, AI technologies that make assumptions about emotional states will likely exacerbate gender and racial inequalities in our society. For example, a 2019 UNESCO report showed the harmful impact of the gendering of AI technologies, with “feminine” voice-assistant systems designed according to stereotypes of emotional passiveness and servitude.