‘No Ceiling Above’

Industry Urges White House to Work With Congress to Block State AI Measures

The Trump administration should work with Congress to preempt burdensome AI regulations at the state level, industry groups told the White House Office of Science and Technology Policy in comments due Monday. Meanwhile, consumer advocates urged OSTP to protect civil rights through mandatory auditing, transparency standards and human oversight of high-risk AI systems.

OSTP in September requested public input on identifying federal laws, regulations, agency rules and guidance that “unnecessarily hinder” AI development and deployment in the U.S.

The Software & Information Industry Association recommended that the administration work with Congress to pass a federal law on “frontier AI model oversight and regulation that preempts state legislation.” The bill should establish baseline requirements for frontier model developers concerning “disclosure, transparency, risk mitigation, and security to avoid the potential danger of fragmented and inconsistent state regulation and oversight,” said SIIA.

Policymakers are passing AI-related state laws and applying existing state measures in potentially disruptive ways, said the Computer & Communications Industry Association. It cited as an example the California Privacy Protection Agency’s recently issued regulations on automated decision-making technology (see 2509230036). CCIA compared California’s opt-out mechanism for “significant” decisions to rights provided by the EU’s General Data Protection Regulation. The CPPA’s early draft proposal of risk assessments for all AI training, which ultimately wasn’t adopted, was “even more disruptive,” said CCIA. “In the final regulation, this was reduced to the more reasonable requirement of risk assessments only for systems that make significant decisions about a consumer, but the potential for disruptive regulation exists.”

CCIA urged the administration to work with Congress to at least partially preempt state AI regulation: “In areas in which preemption exists, it is critical that it fully preempt states from acting. A federal law that provides a floor below which states cannot go, but no ceiling above, will ultimately result in just the type of regulatory morass that preemption seeks to avoid.”

TechNet raised concerns about states adopting conflicting definitions for automated decision systems, algorithmic accountability and high-risk AI, which, it said, forces “companies to design different compliance programs for each jurisdiction even when operating a single national product. ... In some cases, state rules impose obligations that contradict or exceed federal frameworks, particularly in privacy and bias auditing, creating uncertainty about which standard prevails.”

Chamber of Progress called on the federal government to prevent state measures that disproportionately burden data centers. This might include federal policies and guidance restricting “states from enacting discriminatory measures. Establishing a national standard for data center electricity rates and ensuring equal access to economic incentives would help create a consistent and fair operating environment across states.”

The Electronic Privacy Information Center and the Center for Democracy & Technology signed joint comments with more than 30 consumer groups, including Common Cause, the Leadership Conference on Civil and Human Rights, the American Civil Liberties Union and TechEquity Action. EPIC, in its own comments, wrote about previous efforts to block state AI regulations: “Targeting existing regulations for removal or modification communicates to the American people that their rights and safety are secondary to business interests. This will not inspire more trust in AI, only less trust in authorities meant to protect us.”

The joint comments suggested modernizing or strengthening existing protections to ensure they can be applied to AI systems. The groups recommended mandatory audits, transparency standards and human oversight of high-risk AI systems: “AI systems used in high-stakes [contexts] -- such as those in housing, lending, employment, public benefits and services, or criminal justice -- should be subject to independent bias audits and impact assessments, with public disclosure of findings and required remediation of any civil rights risks.”