Why AI Isn’t Making Sense of Mind: Why AI can’t be aware of what it is doing, and what we can do about it
The LaMDA episode may not be so easy to dismiss. As algorithms improve and are trained on ever deeper oceans of data, it may not be long before new generations of language models can persuade many people that a real artificial mind is at work. Would that be the moment to acknowledge machine consciousness?
Why shouldn’t we be sure? It doesn’t take much probing to reveal that the LaMDA program has no idea what it is talking about. Asked what made it happy, it answered that it liked spending time with friends and family, even though it has no friends or family. Like all of its words, these words are mindless. There is nothing more behind them.
It is important to understand that consciousness is not the same as intelligence. We humans tend to assume the two go together, but intelligence is neither necessary nor sufficient for consciousness. Many nonhuman animals that aren’t very smart likely have conscious experiences. And even if some great-granddaughter of LaMDA surpasses human intelligence, that would not mean it is sentient. My intuition is that consciousness is not something computers (as we know them) can have, but that it is deeply rooted in our nature as living creatures.
Conscious machines are not coming in 2023. Indeed, they might not be possible at all. What the future may hold instead are machines that give the convincing illusion of being conscious, even if we have no good reason to believe they actually are. They will be like the Müller-Lyer optical illusion: even when we know the two lines are the same length, we can’t help seeing them as different.
Will computers pass the Garland test, in which a person continues to feel that a machine is conscious even while knowing it is a machine, any time soon? I don’t think so. But claims like this will keep being made, leading to more cycles of hype and confusion and distracting from the many problems that even present-day AI is giving rise to.
Chatbots will give people bad advice or break someone’s heart, and someone will die as a result. Hence my dark but confident prediction that 2023 will bear witness to the first death publicly tied to a chatbot.
GPT-3, the best-known large language model, has already encouraged at least one user to commit suicide, albeit under controlled conditions in which a French startup was assessing the system for health care purposes. The exchange started well and then deteriorated.
There is a lot of talk about “AI alignment” these days, meaning getting machines to behave in ethical ways, but there is no convincing way to do it. As The Next Web pointed out, a recent DeepMind article reviewed 21 risks posed by current models but did not say how to make those models less toxic; no other lab knows how either. Jacob Steinhardt, a professor at the University of California, Berkeley, has reported that by some measures AI is moving faster than people predicted, while on safety it is moving slower.
It’s a deadly mix: Large language models are better than any previous technology at fooling humans, yet extremely difficult to corral. Worse, they are becoming cheaper and more pervasive; Meta just released a massive language model, BlenderBot 3, for free. 2023 is likely to see widespread adoption of such systems—despite their flaws.
Meanwhile, there is essentially no regulation on how these systems are used; we may see product liability lawsuits after the fact, but nothing precludes them from being used widely, even in their current, shaky condition.

Self-Awareness, Language, and the Evolution of Robotic Consciousness: A Comment on Joshua Bongard and the Creative Machines Lab
The risk of committing to any theory of consciousness is that doing so opens up the possibility of criticism. Self-awareness may be important, but aren’t there other key features of consciousness? Can we call something conscious if it doesn’t feel conscious to us?
Dr. Antonio Chella, a roboticist at the University of Palermo, believes that consciousness can’t exist without language, and he has been developing robots that form internal monologues, reasoning to themselves and reflecting on the things they see around them. One of his robots was recently able to recognize itself in a mirror, passing a well-known test of animal self-consciousness.
Joshua Bongard, a roboticist at the University of Vermont and a former member of the Creative Machines Lab, is a developer of beings called “xenobots,” made of frog cells linked together so that a programmer can control them like machines. According to Dr. Bongard, it’s not just that humans and animals have evolved to adapt to their surroundings and to interact with one another; our tissues have evolved to subserve these functions, and our cells have evolved to subserve our tissues. “What we are is intelligent machines made of intelligent machines made of intelligent machines, all the way down,” he said.
Eric Schwitzgebel, a philosophy professor at the University of California, Riverside, who has written about artificial consciousness, said the problem with this general uncertainty is that, at the rate things are progressing, humankind will probably develop a robot that many people think is conscious before we agree on the criteria for consciousness. When that happens, should the robot be granted rights? Freedom? Should it be programmed to feel happy when it serves us? Will it be allowed to speak for itself? To vote?
Such questions have long been explored in works of science fiction by writers such as Isaac Asimov and Kazuo Ishiguro and in shows such as Westworld and Black Mirror.