‘Subtle’ Manipulation

OpenAI CEO Asks Senate Judiciary for AI Regulation

ChatGPT creator OpenAI supports creating a new federal agency to regulate artificial intelligence and other disruptive technologies, OpenAI CEO Samuel Altman told Senate Judiciary Committee members Tuesday.

Members discussed various concerns about AI, including employment impacts, privacy invasions, copyright issues and social media manipulation affecting free elections. During the Senate Privacy Subcommittee hearing, Senate Judiciary Committee ranking member Lindsey Graham, R-S.C., discussed forthcoming legislation he's putting together with Sen. Elizabeth Warren, D-Mass., that seeks to create a new tech regulator (see 2209160053).

Congress could create 10 new agencies, but if they don’t have the right resources and expertise, industry will run circles around regulators, said Senate Privacy Subcommittee Chairman Richard Blumenthal, D-Conn. Blumenthal urged Congress to create the right framework so AI rules are enforceable and real. He asked witnesses about a potential legislative solution akin to “nutritional labels,” which would detail the “ingredients” in AI technology and help users decide whether a service is reputable.

There needs to be “clear liability” when AI products harm consumers, Altman said during the hearing. He told members the company is focused on election manipulation, news content and copyright for music creators. OpenAI backs a new regulator that would license operators of AI technology, as Graham envisions, Altman said.

Congress isn’t happy with the current landscape for Communications Decency Act Section 230 and the lack of liability for social media companies, said Senate Judiciary Committee Chairman Dick Durbin, D-Ill. Durbin said he doesn’t want to repeat the mistakes made with Section 230. He said he can’t remember a time when industry came before Congress and asked for regulation, since industry usually prefers government to stay out of the way of innovation.

IBM has been urging “precision regulation” for AI for years, said IBM Chief Privacy and Trust Officer Christina Montgomery, calling it an issue of trust. The technology needs to be deployed in a responsible and clear way, she said. The Computer & Communications Industry Association released a statement Tuesday encouraging policymakers to “adopt a technology-neutral approach, examine where existing laws already apply to AI technology and ensure that we use existing frameworks ahead of creating additional layers of bureaucracy that could impede both oversight and progress as companies compete to offer new digital services and technologies.”

Center for Data Innovation Senior Policy Analyst Hodan Omaar said the concept of creating a national regulator to license AI operation is “seriously flawed.” Just as it “would be ill-advised to have one government agency regulate all human decision-making, it would be equally ill-advised to have one agency regulate all AI,” said Omaar. “Regulators need industry-specific knowledge.”

Senate Privacy Subcommittee ranking member Josh Hawley, R-Mo., focused on AI election manipulation. Collecting data from Google Search and using it to fine-tune strategies that elicit reactions from people can have major impacts on undecided voters, said Hawley. Altman said this is “one of my areas of greatest concern.” Users should know when content is AI-generated, and there should be clear rules about disclosure, said Altman. The problem is that AI enables “subtle” manipulation that can go completely undetected, said Gary Marcus, New York University professor emeritus-psychology and neural science.

Blumenthal played an AI-generated voiceover of himself, indistinguishable from his own voice, reading a script written by ChatGPT. Consumers in a few years will look back on the current version of ChatGPT much the way they look back on older cellphone technology, said Blumenthal. There are situations where the “risks are so extreme” that Congress should ban the use of AI, particularly for commercial invasions of privacy and decisions that affect people’s livelihoods, said Blumenthal.

Sen. Marsha Blackburn, R-Tenn., cited musicians in Nashville and the need to protect the copyrighted works AI relies on to create content, including when the technology impersonates country music stars. Altman said his company is engaged on the issue, and content owners should benefit from the technology when their work is used. Sen. Amy Klobuchar, D-Minn., raised similar concerns about AI stealing news content. Klobuchar introduced legislation Tuesday that would require a disclaimer for AI-generated political ads. Cosponsored by Sens. Cory Booker, D-N.J., and Michael Bennet, D-Colo., the Real Political Ads Act is aimed at improving election information transparency.