AI could replace humans, automate jobs and remake civilization: a warning about GPT-4, Bard and other AI language models
An open letter warns that language models like GPT-4 can already compete with humans at a growing range of tasks and could be used to automate jobs and spread misinformation. It even raises the possibility that artificial intelligence systems could replace humans and remake civilization.
Yoshua Bengio, a Turing Award-winning artificial intelligence pioneer, is among the confirmed signatories of the petition organized by the Future of Life Institute, a group known for its warnings about existential threats such as nuclear war.
But excitement around ChatGPT and Microsoft’s maneuvers in search appear to have pushed Google into rushing its own plans: Bard, its competitor to ChatGPT, is built on a language model similar to OpenAI’s offerings. “It feels like we are moving too quickly,” says Peter Stone, a professor at the University of Texas at Austin and chair of the One Hundred Year Study on AI, a report aimed at understanding the long-term implications of AI.
Artificial intelligence experts have become increasingly concerned about AI tools’ potential for biased responses, their ability to spread misinformation, and their impact on consumer privacy. Questions have also been raised about how the technology could upend professions, enable cheating, and shift humanity’s relationship with technology.
The Future of Life Institute and members of the European Parliament are pushing for a six-month artificial intelligence pause to “consider the risks to humanity”
The letter, which was written by the Future of Life Institute, an organization focused on technological risks to humanity, adds that the pause should be “public and verifiable,” and should involve all those working on advanced AI models like GPT-4. It does not suggest how a halt on development could be verified, but adds that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” something that seems unlikely to happen within six months.
Musk cofounded OpenAI in 2015, but he left three years later and has since criticized the company. Microsoft, which Gates cofounded, has invested billions of dollars in OpenAI.
Corporate ambitions and the desire for dominance often win out over ethical concerns, according to Su, who says he would not be surprised if these organizations are already testing something more advanced than the models we have today.
Governments are working to regulate high-risk artificial intelligence tools. The United Kingdom released a paper Wednesday outlining its approach, which it said “will avoid heavy-handed legislation which could stifle innovation.” Lawmakers in the European Union are trying to pass sweeping rules for artificial intelligence.
The call comes from a group of prominent computer scientists and other tech industry notables, including Elon Musk and Apple cofounder Steve Wozniak, who are urging a six-month pause to consider the risks.
“A pause is a good idea, but the letter is vague and doesn’t take the regulatory problems seriously,” says James Grimmelmann, a Cornell University professor of digital and information law, who adds that it is hypocritical for Musk to sign given how hard Tesla has fought against accountability for the flawed artificial intelligence in its self-driving cars.
Citing ethical concerns, Google had kept its own powerful language models out of public view
The pace of change, and the scale of investment, is significant. Microsoft has poured $10 billion into OpenAI and is using its AI in its search engine Bing as well as other applications. Google developed some of the artificial intelligence needed to build GPT-4, and has created powerful language models of its own, but until this year it held them back, citing ethical concerns.