Strong Enforcement Needed

Leap Ahead for AI Law, European Treaty Finalized

AI regulation leaped forward Tuesday following EU governments' approval of the AI Act and Friday's adoption of the Council of Europe (CoE) framework convention on AI. The EU measure is "the first of its kind in the world and can set a global standard for AI regulation," the European Council said. The CoE document is the first international treaty "aimed at ensuring the respect of human rights, the rule of law and democracy legal standards in the use of artificial intelligence (AI) systems," the CoE said in an email.

The Council is composed of representatives of the 27 EU governments. The CoE, a separate 46-member human rights organization, counts the individual EU countries among its members; its convention was negotiated with the participation of the EU and non-member states including Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the U.S. and Uruguay.

The EU act harmonizes AI rules in the public and private spheres via a risk-based approach: The higher the risk of harm an AI application poses, the stricter the rules that govern it. For instance, AI systems posing limited risk will be subject to light transparency requirements, while high-risk applications must comply with tougher obligations, including a fundamental rights impact assessment. Cognitive behavioral manipulation and social scoring are banned.

The law bars the use of AI for predictive policing based on profiling, and it prohibits systems that use biometric data to categorize people according to characteristics such as race or religion. On general-purpose AI, such as ChatGPT, the act subjects models that don't pose systemic risks to limited requirements, such as transparency obligations, while models that carry systemic risks face tighter controls.

The AI Act creates several enforcement bodies: an AI Office within the European Commission; a scientific panel of independent experts; an AI Board made up of EU countries' representatives; and a stakeholders' advisory forum. Fines for violating the act are set as a percentage of a company's global annual revenue in the previous financial year or a predetermined amount, whichever is higher, with proportional administrative penalties for smaller companies and startups. The measure takes effect 20 days after publication, after which businesses will have two years to comply.

The EU AI Act "has the potential to drive 'digital compliance' in businesses in a better way" than the general data protection regulation has, Pinsent Masons technology attorney Wouter Seinen noted in the firm's Out-Law news. While the GDPR tries to change companies' mindset and attitude toward transparency and risk management, compliance rates remain disappointing, he said, due partly to a lack of standards and good practices as well as to European enforcement strategies. Under the AI Act, the focus should be on "nudging and educating, backed with systematic enforcement, rather than setting examples by singling out a handful of companies and imposing massive fines on them," Seinen said.

"Now is the time for governments to make the best out of the new law by swiftly appointing the regulators" who will enforce it and giving them the necessary resources, the European Consumer Organisation said.

The CoE treaty also takes a risk-based approach and covers government and private sector AI systems. The convention aims to ensure that "activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law." National security and research activities aren't covered. Countries signing the treaty (which opens for signature in September) must create or maintain appropriate legislative, administrative or other measures to ensure transparency, oversight, equality and privacy/data protection.

The CoE treaty aligns with the AI Act, but the two operate at different levels of international law, a CoE spokesperson said in an email. If the EU signs and ratifies the convention, the AI Act will likely be used to implement the obligations the EU takes on as a party to the treaty, he said. The same goes for any other country that signs the convention: It will implement the treaty through its domestic AI laws.

Asked about potential overlap between the two instruments, the CoE spokesperson noted that, unlike the AI Act, the convention doesn't place additional requirements on governments or companies. The focus of the CoE treaty is protecting human rights, democratic processes and the rule of law: "It is not a market-regulating instrument."