What Self-Driving Cars Can Teach the AI Industry About Regulation: OpenAI CEO Sam Altman and former NHTSA safety adviser Missy Cummings
The Senate hearing on the subject of artificial intelligence was remarkably friendly. Sam Altman, the OpenAI CEO, readily agreed with lawmakers on the need to regulate the new technology, and the politicians seemed happy to hand responsibility for drafting the rules to the companies themselves. One senator said he could not remember the last time representatives of large corporations and private-sector entities had come before him and asked for permission to operate.
Cummings told me this week that she left the National Highway Traffic Safety Administration (NHTSA) with a sense of profound concern about the autonomous systems being deployed by many car manufacturers. “We’re in serious trouble in terms of the capabilities of these cars,” Cummings says. “People think they’re capable, but they’re not.”
Like ChatGPT, Tesla’s Autopilot and other autonomous driving projects have been elevated by absurd amounts of hype. Fanciful dreams of a transportation revolution drove huge sums into developing and deploying a technology that still has many unresolved problems. The regulatory environment around autonomous cars in the mid-2010s was permissive, with government officials loath to apply the brakes to a technology that promised to be worth billions to US businesses.
After billions spent on the technology, self-driving cars remain beset by problems, some auto companies have pulled the plug on big autonomy projects, and the public is still unsure how capable the technology really is.
It is good that governments and lawmakers are interested in regulating generative artificial intelligence. The current panic is centered on large language models and tools like ChatGPT that are remarkably good at answering questions and solving problems, even if they still have significant shortcomings, including a tendency to confidently fabricate facts.
One rhetorical move at the hearing was telling. Discussing government licensing, OpenAI’s Altman quietly suggested that any licenses need apply only to future systems. “Where I think the licensing scheme comes in is not for what these models are capable of today,” he said. “But as we head towards artificial general intelligence … that’s where I personally think we need such a scheme.”
The researcher Joy Buolamwini, for example, has identified problems with bias in facial recognition, a technology that has already led to wrongful arrests in the US. Yet AI-driven surveillance was never mentioned during the hearing, and facial recognition and its flaws were alluded to only once in passing.
Experts at the hearing included IBM’s Christina Montgomery and the noted AI critic Gary Marcus, who raised the specter of regulatory capture. The danger, Marcus warned, is that regulation makes it appear as if something is being done when it is really a kind of greenwashing, and nothing actually happens. And although no one from Microsoft or Google was present, Altman served as the tech industry’s unofficial spokesperson.
Founders of other artificial intelligence companies saw a threat to competition. “Regulation invariably favours incumbents and can stifle innovation,” Emad Mostaque, founder and CEO of Stability AI, told The Verge. Clem Delangue, CEO of AI startup Hugging Face, tweeted a similar reaction: “Requiring a license to train models would be like requiring a license to write code. It would slow down progress, fairness and transparency because power would be in the hands of a few.”
But some experts say some form of licensing could be effective. Margaret Mitchell, who was forced out of Google alongside Timnit Gebru after authoring a research paper on the potential harms of AI language models, describes herself as “a proponent of some amount of self-regulation, paired with top-down regulation.” She told The Verge that she could see the appeal of licensing individuals rather than companies.
At the hearing this week, Altman was less grandiose. He, too, mentioned the problem of regulatory capture, but was less clear about whether licensing should reach smaller entities, saying he doesn’t want to slow down smaller startups or the open source movement, even as they may still need to comply with some requirements.
Despite being a startup, OpenAI is considered the most influential artificial intelligence company in the world. Its deal with Microsoft to remake Bing has made waves in the tech industry, and Altman is well positioned to appeal both to the imaginations of the VC class and to hardcore AI boosters, with grand promises to build superintelligent AI and, maybe one day, in his own words, “capture the light cone of all future value in the universe.”