The case against OpenAI: Musk argues that GPT-4 is better than average humans, but that this alone does not render it truly artificial intelligence (per tweets by Musk)
Musk is a co-founder of OpenAI, which has a unique corporate structure. It is a nonprofit charged with safeguarding humanity against artificial general intelligence, or AGI, a hypothetical AI system that can surpass humans at most tasks. But in 2019, after Musk left the company's board, it also established a for-profit arm with a less altruistic focus. The explosive popularity of ChatGPT and demand for the underlying GPT-4 AI model have made that side of the company worth a reported $80 billion and have drawn the ire of Musk. The billionaire told CNBC last year that he was the reason OpenAI existed.
The lawsuit claims that the GPT-4 model OpenAI released in March 2023 is not just capable of reasoning but is actually "better at reasoning than average humans," having scored in the 90th percentile on the Uniform Bar Examination for lawyers. The company is also rumored to be working on a model known as Q* ("Q Star"), which some claim could be a true artificial general intelligence.
AI systems exist across a spectrum of openness, ranging from fully open source to fully closed, depending on how much of their inner workings is shared with researchers and the public. Proponents of open source argue that the approach allows greater transparency and more potential for innovation; critics counter that it could put powerful models in the hands of criminals. Meta's Llama 2 model is free to download, modify, and deploy (though with some restrictions on use), while GPT-4 is not.
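As a rough illustration of that difference in access, the sketch below runs an open-weights model locally and queries a closed model through a hosted API. It assumes the Hugging Face transformers and openai Python packages are installed, that your Hugging Face account has been granted access to the Llama 2 weights, and that an OPENAI_API_KEY is set in the environment; the prompt and model identifiers are illustrative, not drawn from the article.

```python
# Open vs. closed access, sketched side by side (assumptions noted above).

from transformers import AutoModelForCausalLM, AutoTokenizer  # open-weights path
from openai import OpenAI                                      # API-only path

# Open model: the weights themselves are downloaded and run on your own hardware,
# so they can be inspected, fine-tuned, or redeployed (subject to Meta's license).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
inputs = tokenizer("What is artificial general intelligence?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Closed model: GPT-4's weights are never exposed; the only access is sending
# requests to OpenAI's hosted service.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is artificial general intelligence?"}],
)
print(response.choices[0].message.content)
```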