‘Discrete Additions’

Tech Groups Cite Existing Law for AI Regulation

The U.S. shouldn’t pursue AI legislation unless it applies to specific harms not covered by current law, tech associations told the White House in comments due Friday.

The Office of Science and Technology Policy sought input on “national priorities for mitigating AI risks, protecting individuals’ rights and safety, and harnessing AI to improve lives” (see 2305230051). The Computer & Communications Industry Association, the Information Technology Industry Council, the Center for Data Innovation and TechNet said policymakers and regulators need to recognize that existing laws can be used to hold AI service providers accountable.

Consumer advocates urged the White House to focus its national AI strategy on keeping AI systems equitable and protecting civil rights. The Leadership Conference on Civil and Human Rights submitted comments in collaboration with the Communications Workers of America, Access Now, American Civil Liberties Union, Anti-Defamation League, Center for American Progress, Lawyers’ Committee for Civil Rights Under Law, National Consumer Law Center and others, saying: “Individuals are grappling with the impacts of discriminatory automated systems in just about every facet of life, leading to loss of economic opportunities, higher costs or denial of loans and credit, adverse impact on their employment or ability to get a job, lower quality health care, and barriers to housing.” They said regulation and legislation can be used to protect “due process, transparency, and equal protection.”

AI is “already subject to existing laws, which will cover the vast majority of needed regulation,” said CCIA Senior Counsel-Innovation Policy Joshua Landau. “AI-specific rules should be reserved for challenges unique to AI. This approach will maximize the benefits of AI while reducing the potential harms.” CCIA said the government should only pursue “discrete additions in the limited instances where AI introduces unique challenges.” That approach would lead to a “predictable and stable environment for AI investment, while limiting duplicative regulation and regulatory arbitrage,” said CCIA.

Policymakers should proceed with new legislation “only if there is a specific harm or risk to individuals where existing legal frameworks are either determined to be insufficient or do not exist,” said ITI. It said AI policy needs to be paired with federal privacy legislation. A federal privacy law would “simplify and strengthen data management practices across the economy and provide a firmer foundation for robust discussions about how to advance trustworthy AI,” said ITI.

A national data privacy framework would establish basic consumer data rights, streamline regulation and minimize the impact on innovation, said the Center for Data Innovation. OSTP shouldn’t discourage companies from collecting and using data, and federal privacy legislation shouldn’t include data minimization requirements, the organization said. The center urged regulators not to frame AI regulation as if AI were a single application of technology: “AI technology is not a tangible thing like a power source (such as nuclear energy) or a physical object (such as a jet engine).”

TechNet noted that several federal agencies have stated their intent to use existing law to regulate AI. The association cited an April 25 joint statement from the FTC, DOJ, Consumer Financial Protection Bureau and Equal Employment Opportunity Commission outlining how existing enforcement authorities apply to automated systems. AI policy should advance alongside privacy legislation, which “would apply to and mitigate some risks to consumers stemming from using AI systems,” said TechNet.

BSA | The Software Alliance said it supports legislation requiring entities to “conduct impact assessments if they develop or deploy high-risk AI systems.” BSA noted impact assessments are already used in environmental protection and data protection, and they can be used to hold companies accountable for mitigating risks.