Sam Altman, the chief executive of OpenAI, the company behind ChatGPT, has told legislators in the United States that government regulation of artificial intelligence is “critical” because of the potential risks it poses to humanity.
Altman used his appearance on Tuesday before a US Senate Judiciary subcommittee to urge Congress to impose new rules on big tech, despite deep political divisions that have blocked legislation aimed at regulating the internet for years.
“If this technology goes wrong, it can go quite wrong,” Altman, who has become the global face of AI, told the hearing.
“OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks,” he said, but given concerns about disinformation, job security and other dangers, “we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models”.
Altman proposed the formation of a US or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards”.
Altman’s San Francisco-based startup rocketed to public attention after it released ChatGPT, a free chatbot tool that answers questions with convincing human-like responses, late last year.
But initial worries about how students might be able to use ChatGPT to cheat on assignments have expanded to broader concerns about the ability of the latest crop of “generative AI” tools to mislead people, spread falsehoods, violate copyright protections and disrupt some jobs.
Legislators aired their deepest fears about AI’s development. The senator leading the hearing on Capitol Hill opened it with a computer-generated voice, which sounded remarkably similar to his own, reading a text written by a chatbot.
“If you were listening from home, you might have thought that the voice was mine and the words were from me, but in fact, that voice was not mine,” said Senator Richard Blumenthal.
Artificial intelligence technologies “are more than just research experiments. They are no longer fantasies of science fiction, they are real and present,” said Blumenthal, a Democrat.
“What if I had asked it, and what if it had provided, an endorsement of Ukraine surrendering or [Russian President] Vladimir Putin’s leadership?”
Global action needed
While acknowledging the enormous potential of AI tools, Altman suggested the US government might consider a combination of licensing and testing requirements before the release of more powerful models.
He also recommended labeling and increased global coordination in setting up rules over the technology.
“I think the US should lead here and do things first, but to be effective we do need something global,” he added.
Senator Josh Hawley, a Republican, said the technology had big implications for elections, jobs and national security and that the hearing marked “a critical first step towards understanding what Congress should do”.
Blumenthal noted that Europe had already made considerable progress with the AI Act, which is set to go to a vote next month at the European Parliament.
A sprawling legislative text, the European Union measure could see bans on biometric surveillance, emotion recognition and certain policing AI systems.
OpenAI, cofounded by Altman in 2015 with backing from tech billionaire Elon Musk, has evolved from a nonprofit research lab with a safety-focused mission into a business. Its other popular AI products include the DALL-E image generator. Microsoft has invested billions of dollars into the startup and has integrated the technology into its own products, including its search engine Bing.
Altman is also planning a worldwide tour this month to national capitals and major cities across six continents to talk about AI with policymakers and the public.
On Capitol Hill, politicians also heard warnings that the technology was still in its early stages.
“There are more genies yet to come from more bottles,” said New York University professor emeritus Gary Marcus, another panelist.
“We don’t have machines that can really … improve themselves. We don’t have machines that have self-awareness, and we might not ever want to go there,” he said.