House Commerce Shows Bipartisan Interest in AI Chatbot Privacy Rules
AI chatbots create privacy risks, and Congress should explore data-protection obligations, House Commerce Committee Republicans and Democrats said during a House Oversight Subcommittee hearing Tuesday.
Committee Chairman Brett Guthrie, R-Ky.; ranking member Frank Pallone, D-N.J.; and House Oversight Subcommittee Chairman John Joyce, R-Pa., all discussed how AI chatbot developers and deployers aren’t bound by health data obligations like those found in HIPAA. House Oversight Subcommittee ranking member Yvette Clarke, D-N.Y., said she sees a need to ensure AI chatbots are “safe, secure and trustworthy.”
Guthrie noted that many online users assume interactions with chatbots are private, even though there are no HIPAA-like “confidentiality obligations.” Questions during Tuesday’s hearing suggest both parties are “barking up the same tree, which, I think, is good,” he said.
Guthrie and Joyce are leading House Republicans’ privacy working group, which launched a federal privacy legislation inquiry in early 2025 (see 2511050007). Joyce used the same terminology as Guthrie during the hearing, saying personal information isn’t protected by “confidentiality obligations.” Joyce noted the FTC is studying AI chatbots (see 2509110068) and said he’s hopeful the inquiry will shed light on how these technologies can be improved.
Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., are pushing for a Senate Judiciary Committee vote on legislation that would ban AI chatbots for minors and require age verification (see 2511050048).
Chatbots retain data to enhance their functionality, and this data is used to train models and improve the accuracy of responses, said Joyce. The accumulation of personal data raises data breach risks and the possibility of conversations falling into the “wrong hands,” he added. Guthrie said he wants to continue an open discussion about the risks but said Congress should also consider the benefits of AI chatbots, such as increased access to mental health assistance.
There are “significant” privacy concerns, particularly when interactions involve mental health, said Pallone. Even though users might assume confidentiality, chatbot interactions are saved and used for training or shared with undisclosed third parties, he said. He called for research and congressional oversight, as well as preserving states’ ability to continue regulating AI. Pallone discussed House Republicans’ failed attempt to pass an AI moratorium and reports that those plans could be revived in National Defense Authorization Act negotiations.
“Republican leadership has said that they may try that misguided effort again very soon,” said Pallone. “That is very unfortunate. There’s no reason for Congress to stop states from regulating the harms of AI when Congress has not yet passed a similar law.” Congress often looks to states to chart a path forward, so blocking AI regulation at the state level would be counterproductive, he added.
Witnesses testified that there’s a need for transparency standards, so the public can understand how training data is gathered and how personal data is handled or shared. Harvard Medical School Director of Digital Psychiatry John Torous said companies are finding ways to use health data in novel ways, and Congress can get ahead of the issue with transparency mandates.
Stanford Institute for Human-Centered Artificial Intelligence Privacy and Data Policy Fellow Jennifer King agreed there’s “very little” transparency about how training data is sourced and how data is collected and shared.
Psychiatrist Marlynn Wei noted that some companies like OpenAI have deployed age-verification systems to identify teens and safeguards against self-harm. But age-verification methods can be unreliable and also require biometric data collection, which creates other child-related issues, she said.
Torous said age-verification tools are useful, but Congress must explore the core issues of why chatbots are resulting in child harm. King said she supports efforts in California to establish age-verification requirements for device manufacturers, app stores and parents.