A group says the FTC should stop OpenAI from releasing new GPT models

Prominent AI Researchers Call for a Pause on Systems More Powerful Than GPT-4

Yoshua Bengio, a professor at the University of Montreal, is among the signatories of a letter calling for a halt to the training of artificial intelligence systems more powerful than GPT-4.

The petition was organized by the nonprofit Future of Life Institute, which says confirmed signatories include the Turing Award-winning AI pioneer Yoshua Bengio and other leading AI researchers such as Stuart Russell and Gary Marcus. Others who joined include Apple co-founder Steve Wozniak, former U.S. presidential candidate Andrew Yang and Rachel Bronson, president of the Bulletin of the Atomic Scientists, a science-oriented advocacy group known for its warnings against humanity-ending nuclear war.

The letter comes as AI systems are advancing by leaps and bounds. GPT-4 was announced only two weeks ago, but its capabilities have stirred up considerable enthusiasm and concern. The language model, available via the popular ChatGPT chatbot, is adept at answering tricky questions and can score highly on many academic tests. Yet GPT-4 also makes plenty of trivial logical mistakes. And, like its predecessors, it sometimes “hallucinates” incorrect information, betrays ingrained societal biases, and can be prompted to say hateful or potentially harmful things.

Artificial intelligence experts have become increasingly concerned about AI tools’ potential for biased responses, their ability to spread misinformation and their impact on consumer privacy. The tools have also sparked questions about how students might use them to cheat and how they may reshape our relationship with technology.

AI’s Existential Risks: The Future of Life Institute and Elon Musk’s Case for a 6-Month Moratorium

The letter, which was written by the Future of Life Institute, an organization focused on technological risks to humanity, adds that the pause should be “public and verifiable,” and should involve all those working on advanced AI models like GPT-4. It does not propose a way to verify such a halt, but it does state that if a pause cannot be enacted quickly, governments should step in and institute a moratorium.

Musk, who runs Tesla, Twitter and SpaceX and was an OpenAI co-founder and early investor, has long expressed concerns about AI’s existential risks. A more surprising inclusion is Emad Mostaque, CEO of Stability AI, maker of the AI image generator Stable Diffusion that partners with Amazon and competes with OpenAI’s similar generator known as DALL-E.

“Corporate ambitions and desire for dominance often triumph over ethical concerns,” Su said. “I won’t be surprised if these organizations are already testing something more advanced than ChatGPT or [Google’s] Bard as we speak.”

Still, the letter reflects broader discomfort, inside and outside the industry, with the rapid pace of AI advancement. Governments in China, the EU and Singapore have previously introduced early versions of AI governance frameworks.

The call for a pause comes from a group of prominent computer scientists and other tech industry notables, including Elon Musk and Apple co-founder Steve Wozniak, who want six months to consider the risks.

James Grimmelmann, a Cornell University professor of digital and information law, said it is deeply hypocritical for Musk to sign on, given that he has resisted accountability for the malfunctioning artificial intelligence in his cars.

CAIDP’s FTC Complaint: A Challenge to the Commercial Deployment of OpenAI’s Language Models

The letter warns that language models like GPT-4 could be used to automate jobs and spread misinformation, and it raises the longer-term possibility of AI systems that could replace humans and remake civilization.

The pace of change, and the scale of investment, is significant. Microsoft has poured $10 billion into OpenAI and is using its AI in its Bing search engine as well as other applications. Google created powerful language models of its own and developed some of the artificial intelligence needed to build GPT-4, but until this year it chose not to release its models due to ethical concerns.

An artificial intelligence-focused tech ethics group has asked the Federal Trade Commission to investigate OpenAI for violating consumer protection rules, arguing that the organization’s rollout of AI text generation tools has been “biased, deceptive, and a risk to public safety.”

In the complaint, CAIDP asks the FTC to halt any further commercial deployment of GPT models and to require independent assessments of the models before any future rollouts. It also asks the agency to create a publicly accessible reporting tool, similar to the one that lets consumers file fraud complaints, and to begin formal rulemaking on generative artificial intelligence systems, even as the agency is still conducting research in the area.