TechCrunch, CC BY 2.0, via Wikimedia Commons
ChatGPT chief warns US Senate committee to regulate artificial intelligence

CEO of OpenAI Samuel Altman appeared before the US Senate Subcommittee on Privacy, Technology, and the Law on Tuesday for a hearing about how the US might regulate the use of artificial intelligence (AI) platforms like OpenAI’s ChatGPT. Altman called on the US “to develop regulations that incentivize AI safety while ensuring that people are able to access the technology’s many benefits.”

The hearing opened with AI-generated audio of Senator Richard Blumenthal (D-CT). After playing the audio, Blumenthal explained that AI voice-cloning software had produced the recording based on his previous appearances in Congress, while ChatGPT wrote the script based on what it identified as Blumenthal’s congressional record. Blumenthal noted that while the technology’s capabilities are impressive, they also present the potential for harm. He pointed out, for example, that the same software could be used to generate false audio of Blumenthal endorsing Russian President Vladimir Putin or Ukraine’s surrender in the ongoing war. He said that this is a reality the US must now reckon with.

Senator Josh Hawley (R-MO) echoed Blumenthal’s concerns. He stressed the rapid development of AI technology like ChatGPT, stating that the hearing could not have taken place a year ago because the technology did not yet exist.

Appearing before the subcommittee to help lawmakers navigate the rapidly developing field of AI was a panel of industry leaders and academics: Altman, IBM Chief Privacy and Trust Officer Christina Montgomery, and New York University Professor Gary Marcus. All three stressed the need for the US to develop laws and regulations around AI. The panel suggested establishing licensing and testing requirements to ensure AI technologies do not promote harmful material or usage. Montgomery and Marcus went so far as to call on AI companies to halt development for six months to allow governments to develop a regulatory framework.

As it stands, the AI industry operates entirely on voluntary rules adopted by the companies themselves. Altman stated that OpenAI, for example, has established guardrails for the use of ChatGPT. But lawmakers said they were wary of voluntary measures, citing the “race to the bottom” exhibited by social media companies like Meta and TikTok. Both companies have faced increasing scrutiny for their failure to adopt user protections in the absence of congressional oversight. Blumenthal said, “Congress failed to meet the moment on social media. Now we have the obligation to do it on AI before the threats and risks become real.”

One of the most pressing areas of concern that the panel highlighted was AI’s ability to manipulate and persuade individuals using disinformation. Altman and Hawley, in particular, highlighted the importance of preventing AI from spreading disinformation ahead of the 2024 US presidential election. The potential for such harm most recently prompted Italy to temporarily ban ChatGPT in March.

While Altman said there may come a point where the public is able to recognize AI-generated audio, images, and text (as it has with photoshopped images), that point has not yet come. For now, Altman said, AI remains “photoshop on steroids.”