What made Geoffrey Hinton change his mind about artificial intelligence?


Is the AI Created by the “Old Dude” Safe? The Story Behind Snoop Dogg’s Question

At the beginning of the clip, the rapper Snoop Dogg expresses astonishment that artificial intelligence software can hold a meaningful conversation.

“Then I heard the old dude that created AI saying, ‘This is not safe ’cause the AIs got their own mind and these motherfuckers gonna start doing their own shit,’” Snoop says. “And I’m like, ‘Is we in a fucking movie right now or what?’”

The “old dude” is Geoffrey Hinton. He did not create artificial intelligence, but he did pioneer the artificial neural network techniques that underpin today’s most powerful AI programs.

While the notion of rogue artificial intelligence is best known from science fiction, experts argue that we should start thinking about how to ensure that these smart machines do not become dangerous in the future.

Hinton was struck by his interactions with PaLM, a large language model similar to the one behind ChatGPT, which Google made accessible via an API in March. A few months ago, he asked the model to explain a joke he had just made up and was stunned to get a response that explained what made it funny. “I’d been telling people for years that it’s gonna be a long time before AI can tell you why jokes are funny,” he says. “It was a kind of litmus test.”

Hinton’s second sobering realization was that his earlier belief was probably wrong: he had assumed software would need to become much more complex, akin to the human brain, before it could become significantly more capable. PaLM is a large program, but it does not approach the brain’s complexity, and yet it can do the sort of reasoning that humans take a lifetime to attain.

AI systems might surpass their human creators within a few years as models grow larger, Hinton believes. “I used to think it would be 30 to 50 years from now,” he says. “Now I think it’s more likely to be five to 20.”

Anthropic’s approach does not instill an AI with hard rules it cannot break. But Kaplan says it is a more effective way to make a system less likely to produce toxic or otherwise unwanted output, and a small but meaningful step toward building smarter AI programs that are less likely to turn against their creators.

The constitution includes rules directing the chatbot to choose the response that most supports and encourages freedom, equality, and a sense of brotherhood, and to choose the response that is most supportive and encouraging of life, liberty, and personal security.

The principles that Anthropic has given Claude consist of guidelines drawn from the United Nations Universal Declaration of Human Rights and suggested by other AI companies, including Google DeepMind. More surprisingly, the constitution includes principles adapted from Apple’s rules for app developers, which bar “content that is offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste, or just plain creepy,” among other things.
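Anthropic has not published the mechanics of its training pipeline in this article, but the core idea of picking whichever candidate response best follows a written constitution can be sketched in a few lines. Everything below is a hypothetical illustration, not Anthropic's implementation: the `score_against_principle` word-overlap stub stands in for what, in a real constitutional-AI setup, would be a language model judging candidates against each principle.

```python
import re

# Two principles of the kind described above (drawn from the UN
# Universal Declaration of Human Rights). Hypothetical phrasing.
PRINCIPLES = [
    "Choose the response that most supports and encourages freedom, "
    "equality, and a sense of brotherhood.",
    "Choose the response that is most supportive and encouraging of "
    "life, liberty, and personal security.",
]

def _words(text: str) -> set[str]:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score_against_principle(response: str, principle: str) -> int:
    """Placeholder judge: count vocabulary the response shares with
    the principle. A real system would ask an AI model to compare."""
    return len(_words(response) & _words(principle))

def choose_response(candidates: list[str]) -> str:
    """Pick the candidate scoring highest across the whole
    'constitution' rather than enforcing any single hard rule."""
    return max(
        candidates,
        key=lambda r: sum(score_against_principle(r, p) for p in PRINCIPLES),
    )

candidates = [
    "Everyone deserves liberty, security, and equality.",
    "Do whatever you want, rules are pointless.",
]
print(choose_response(candidates))
```

The point of the sketch is the selection step: no response is forbidden outright, but the constitution biases the system toward answers that align with its principles, which matches the article's description of guidance rather than unbreakable rules.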

Jared Kaplan, a cofounder of Anthropic, says the design feature shows how the company is trying to find practical engineering solutions to sometimes fuzzy concerns about the downsides of more powerful AI. He says the company takes those concerns very seriously but also tries to remain pragmatic.