The benefits of automation and AI in the workplace have been overlooked.

The Turing Trap: Why Humanlike AI Is the Wrong Goal, and How Augmentation Could Turn Scientists Into Super-Scientists

The test Turing devised became the north star for generations of artificial intelligence pioneers. For decades, they’ve tried to emulate basic human skills, with wild success: now we’ve got artificial intelligence that can hold conversations, draw pictures, and play video games.

But now some AI thinkers wonder whether we’ve succeeded a little too well—at the wrong task. By imitating human abilities, these systems put machines into direct economic competition with human workers. Stanford economist Erik Brynjolfsson argues that Turing may have led us astray.

He calls it the Turing Trap. It’s certainly true that humanlike AI is on a roll: behold the rise of uncannily deft visual-art generators such as DALL-E and Midjourney. This summer, a game designer entered a Midjourney creation in the Colorado State Fair art contest and won first prize, apparently without any of the judges suspecting the work had been done by a computer—nailing, as it were, an aesthetic Turing test.

Brynjolfsson gets why AI creators have been so enchanted with mimicking human abilities. It caters to a desire to play god, creating life forms in our own image. “Every culture has a myth about this,” Brynjolfsson says. Ancient Greeks told stories of the inventor Daedalus producing mechanisms that walked like men, Jewish folklore had the golem, and real-life inventors have been crafting humanlike automata from the early Islamic world to Renaissance Europe. Modern sci-fi is littered with AI that walks and talks like humans.

“We need to change the target,” he says. Consider AlphaFold, DeepMind’s AI for predicting the three-dimensional structures of proteins. No human can sort through the millions of possible configurations a protein chain might fold into. But by using AlphaFold, scientists could potentially become super-scientists, able to explore far more possibilities for drugs and medical treatments than they could on their own. When I spoke to DeepMind CEO Demis Hassabis last winter, he argued, much like Brynjolfsson, that augmentation was the promising way forward. “What I’m hoping for is AI as this sort of ultimate tool that’s helping science experts,” he said. He anticipates “a huge flourishing in the next decade,” and says that “we will start seeing Nobel-Prize-winning-level challenges in science being knocked down one after the other.”
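To make the “AI as ultimate tool” idea concrete: AlphaFold’s predictions are published through the public AlphaFold Protein Structure Database, and a researcher can pull one down in a few lines of code. The sketch below is a minimal illustration in Python; the endpoint path and the JSON field names are assumptions about that database’s API, and the UniProt accession is just an example, so check the database’s documentation before relying on any of it.

```python
# A minimal sketch of fetching a predicted protein structure from the
# AlphaFold Protein Structure Database (alphafold.ebi.ac.uk).
# ASSUMPTIONS: the /api/prediction/{accession} endpoint and the
# "uniprotDescription"/"pdbUrl" response fields are based on the public
# API as commonly documented, not confirmed by this article.
import requests

UNIPROT_ID = "P69905"  # example accession: human hemoglobin subunit alpha

resp = requests.get(
    f"https://alphafold.ebi.ac.uk/api/prediction/{UNIPROT_ID}",
    timeout=30,
)
resp.raise_for_status()
entries = resp.json()  # a list of predicted models for this accession

for entry in entries:
    # Print a short description and the URL of the predicted structure
    # file, which could feed whatever analysis pipeline a scientist
    # already uses.
    print(entry.get("uniprotDescription"), entry.get("pdbUrl"))
```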

At the University of San Diego, a law professor specializing in labor and employment has studied the effects of technology and the gig economy on workers. That work has made her familiar with the potential disruptions caused by tools like automated résumé screening and apps that use algorithms to assign work to people. In her view, though, the discussion about automation and artificial intelligence has become stuck on the harms these systems create.

Over the last decade, I’ve seen two disconnected discussions. People on the inside of the tech industry are not really interested in equality, distributive justice, and fairness; they’re just celebrating technology for the sake of technology. People on the outside ask, “Who are the winners and who are the losers?” I wanted to bridge the two conversations.

We need to celebrate opportunities and successes, not just have tunnel vision on the problems. Otherwise, the people interested in having these conversations get discouraged. Some of them, especially women and minorities, decide they’re not interested in working for Big Tech. It’s a vicious circle: we get fewer of those diverse voices on the inside, and the people who are critiquing or staying agnostic have less skin in the game.

People think that if they ask the right question, algorithms will give a precise or perfect answer. Is there a danger that no one will question automated decisions, whether hiring calls or accusations of harassment?

I’ve been researching hiring and diversity and inclusion for a long time. We know that discrimination and disparity happen even without automated decision-making. If you are going to use a hiring algorithm, you should first ask whether it does better or worse than the human process it would replace. What are the sources of bias, and can they be corrected, say, by adding more training data? How much can we debias humans versus how much can we improve the different systems?
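One hedged illustration of what such an audit could look like in practice: the Python sketch below applies the EEOC’s “four-fifths” rule of thumb, which flags any group whose selection rate falls below 80 percent of the highest group’s rate. The outcome data and group labels are hypothetical, and the rule is a screening heuristic, not a legal determination.

```python
# A hypothetical audit of a hiring tool's outcomes using the EEOC's
# four-fifths rule of thumb: a group's selection rate below 80% of the
# highest group's rate is a red flag for possible adverse impact.
from collections import Counter

# Hypothetical (group, was_hired) outcomes from a screening tool.
outcomes = [
    ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_a", True), ("group_b", True), ("group_b", False),
    ("group_b", False), ("group_b", False),
]

applicants = Counter(group for group, _ in outcomes)
hires = Counter(group for group, hired in outcomes if hired)

# Selection rate per group: hires / applicants.
rates = {g: hires[g] / applicants[g] for g in applicants}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best  # impact ratio relative to the best-off group
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")
```

Running the same comparison on the human process the tool would replace gives the baseline she describes: a way to tell whether the algorithm is better or worse than what it displaced.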

A vast majority of large companies today are using some form of automated résumé screening. It’s important for agencies like the US Equal Employment Opportunity Commission and the Labor Department to look at the claims versus the results. There hasn’t been enough nuanced conversation about the sources of the risks and whether they can be corrected.