Industry Opposed

FCC Proposal on AI Political Ads Draws Scrutiny and Challenges of Agency Authority

Legislators, broadcasters, cable groups, the Heritage Foundation and civil rights groups disagree on whether the FCC can or should require disclosures for political ads created with generative AI, according to comments filed in docket 24-211 by Thursday’s deadline.

Industry-side and conservative commenters such as NAB and the Free State Foundation decried the plan as outside FCC authority and overly burdensome, while public interest groups focused on the need to combat AI misinformation. Despite that divide, nearly every commenter agreed that the FCC’s proposed definition of AI as “computational technology or other machine-based systems” is overly broad. Advertisers are unsure whether the proposed rules apply to commonly used editing software, said Public Knowledge, which otherwise supported the FCC proposal. The rules would encompass “virtually all of today’s audio and video production methods, even if merely used to enhance color and lighting in a television ad or reduce background noise in a radio ad,” said NPRM opponent NAB.

“This definition,” said the Lawyers’ Committee for Civil Rights Under Law, “means that any use of a computer to create media will meet the definition, whether or not AI is used at all.” Instead, the FCC should use a definition that focuses on “potentially deceptive AI-generated content,” said PK. A broader scope could lead to over-reporting, where ads without AI-generated content end up with AI labels, “thereby creating audience fatigue and watering-down the effectiveness of the disclosures,” PK added.

A collection of Democratic U.S. senators also said the definition should be narrowed. In a letter supporting the FCC’s proposal to require disclosures, the lawmakers wrote that the agency should “clarify that longstanding, basic editing tools are not considered as covered content.” The letter included signatures from Sens. Ben Ray Lujan, N.M.; Cory Booker, N.J.; and Amy Klobuchar, Minn. The National Hispanic Media Coalition was one of the few commenters supporting the definition as currently worded, though it said it “may require tweaking and adjusting in the future.”

Many commenters, both supporters and critics of the rules, also agreed that AI deepfakes are a growing problem. “AI-generated and other synthetic media have advanced rapidly in recent years, becoming more prevalent in U.S. political campaigns and posing a greater risk of deceiving voters,” said the Brennan Center for Justice at New York University School of Law. Nexstar “understands and appreciates the Commission’s desire to combat the potential use of artificial intelligence (“AI”) to disseminate false or deceptive information,” said the broadcaster, which opposed FCC action on AI. Focusing on AI rather than on deceptive ads themselves ignores the core problem, said ad company Locality Broadcast. Under the FCC rule, “an ad that in fact conveys false information without using generative AI will have no disclosures and so consumers may mistakenly believe in its accuracy merely because no AI disclosure is on the ad,” Locality said.

Despite those areas of consensus, commenters clashed over whether the FCC has authority to tackle the matter. “This proposal constitutes an overreach of the FCC's statutory authority,” and “threatens to severely distort political discourse through a fragmented regulatory approach,” said the Heritage Foundation. The FCC “simply does not have the authority to compel truthful disclosures about political deepfakes across platforms in a manner that would ultimately benefit the public,” said NAB.

However, supporters said provisions of the Communications Act that empower the FCC to respond to new technologies in the public interest and to require sponsorship identification in political ads give the agency authority to require AI disclosures. “The FCC’s sponsorship disclosure obligations have been upheld over the years and rarely challenged, as have the political programming rules,” said a joint filing from the Leadership Conference on Civil and Human Rights and numerous public interest groups. Sponsorship ID “is not a general grant of authority for the Commission to prescribe whatever disclaimer requirement it sees fit simply because advertising is disseminated over broadcast, cable, or satellite media,” said the People United for Privacy Foundation. “Nor has the Commission, to our knowledge, ever taken such a broad position until now.”

The FCC lacks authority over political advertisers or the production companies that make ads, NAB argued. “A disclosure regime cannot be successful if the information that triggers the disclosure is not accurate or even available, but in this instance that information is controlled by the advertisers,” NAB said. “The use of AI in creating political advertisements can alter the way information is presented to voters,” countered PK. “Therefore, regulating AI-generated political ads falls well within the FCC’s mandate.”

Numerous opponents of the disclosure rules said the FCC proposal encroaches on the territory of the Federal Election Commission. FEC Chair Sean Cooksey has denounced the FCC’s NPRM (see 2406060051). “Congress has made the Federal Election Commission the ‘cop-on-the-beat’ in this area,” said NCTA. “The absence of a clear legislative mandate from Congress to regulate AI, coupled with a carefully considered analysis by the FEC that problematic use of AI in political advertising is covered by existing law” demonstrates that the FCC is “exceeding its jurisdiction, expertise, and constitutional limitations,” said Americans for Prosperity. The FEC doesn't have authority over all ads, said the Center for American Progress. Many political ads would fall under the FCC’s disclosure requirements but are outside FEC authority, such as ads that urge early voting without promoting a specific candidate or that focus on ballot measures, the Center said. “While Congress should someday enact comprehensive legislation regulating AI in elections, the FCC’s proposed rule would provide vital electoral transparency for voters in the interim,” said Campaign Legal Center.

An eventual FCC rule requiring AI political ad disclosures would likely be struck down in a court challenge based on recent U.S. Supreme Court decisions limiting agency authority, said numerous opponents of the agency's proposal. The FCC’s proposal is “likely to run afoul” of the major questions doctrine (MQD), said the Free State Foundation. “Proposing for the first time to regulate the use of AI in connection with political advertisements appears to be a paradigmatic case meeting the MQD criteria.” After the Supreme Court’s ruling against Chevron deference in Loper Bright Enterprises v. Raimondo, the FCC’s “lack of clear statutory authority is fatal to the Commission’s ability to act,” the foundation said. Given “the Supreme Court’s recent rejection of Chevron, the FCC should be prepared to receive less deference in the eventual cases challenging these rules,” said TechFreedom.

The FCC proposal won’t work as the agency intends because it only affects agency regulatees and not tech platforms, said numerous commenters. “Audiences used to seeing disclosures on TV or cable may mistakenly assume that AI-generated content on unregulated digital platforms is equally authentic or valid simply because no disclosure is provided,” said the American Association of Political Consultants. Viewers notified that a broadcast ad “has incorporated AI but who receive no such disclosure for a political advertisement appearing on YouTube,” will likely assume “that one was created with AI’s help and the other was not -- regardless of whether this is actually true,” said the Taxpayer Protection Alliance. Google and Meta, which account for a huge swath of digital ads, already require disclosures for online ads that contain “synthetic or digitally altered content,” said the Center for American Progress. The FCC proposal “is not furthering confusion, but rather reducing confusion by helping finally bring political advertisements run on television and radio into alignment with the standards required by major online advertisers,” said the Center.

Broadcasters, cable groups and advertising companies all said the FCC’s proposal doesn’t comport with the realities of how political ads are purchased and created. “Candidates and other political ad buyers often book time for their broadcast ads weeks prior to their airing but may then make changes to the content such that AI is used in an ad at the last minute,” said Locality. Inquiries about the presence of AI content made when the advertising schedule was placed might no longer be accurate when the ad runs, Locality said. “The party from whom Scripps buys a political ad usually has nothing to do with creating the content for the ad that eventually runs,” said broadcaster E.W. Scripps. The FCC proposes requiring that stations ask the ad buyer about AI use, but that entity “will rarely have information about the content of the advertisement or the production methods used to create it,” Scripps said. “Complying with the proposed requirements” will be “exceedingly onerous” for cable operators, NCTA argued. Most ad spots are received “with just one day of lead time” and MVPDs can run more than 1,000 individual spots in a year, NCTA said.

The FCC should “finalize and implement these rules as soon as possible,” as “the 2024 presidential election is less than two months away,” the letter from Senate Democrats said. Reply comments in the AI political ads proceeding are due Nov. 4, just one day before Election Day. “If the Commission actually issues an order prior to the November 5 election, the chaos it will unleash on the political process will be massive,” said TechFreedom. “Rushing a proceeding of this magnitude increases the risk that the final rules will be rejected by the courts, especially because the FCC aims to regulate constitutionally protected political speech.”