Questions Surround EC Proposal to Allow Legitimate Interest as Basis for AI Training
Lawyers and privacy advocates are raising questions about a key proposal in the European Commission's digital omnibus package that aims to reform the GDPR by allowing legitimate interests as a legal basis for processing personal data for AI models.
The proposal won't give AI developers a green light to grab data, lawyers argued, while privacy group Noyb said it risks upending the GDPR's technology-neutral approach to favor AI. All agreed it raises issues that must be addressed.
At the core of the debate on AI model training "is the need to either collect consent from sufficient users to be able to train data (opt-in) or to allow companies to just use everyone's data and merely allow an objection (opt-out)," Noyb said in a review of the package published Tuesday. The proposal "seems to overall favour AI applications over any other technical approaches to data processing (e.g. a normal database), which would not benefit from having a 'legitimate interest' codified in law."
Data controllers would still have to conduct a balancing test and demonstrate that the processing is necessary and proportionate, wrote Stephenson Harwood data protection attorney Katie Hewson and her colleagues in an analysis Thursday. Controllers couldn't use legitimate interests where applicable law requires consent, they added.
Green-lighting legitimate interests as a legal basis will provide certainty to AI developers, who currently face differing views among regulators, Hewson said. But the proposal leaves unanswered questions, such as how to give data subjects meaningful transparency and how people can exercise their right to object to model training in practice.
The draft actually tightens requirements for the legitimate interests balancing test, Heuking IT attorneys Hans Wulf and Theresa Bardenhewer wrote Thursday. Controllers must consider whether AI training benefits data subjects or the public, whether the processing meets their reasonable expectations, and what specific safeguards are in place to prevent risks such as data leaks, they noted.
"Businesses should not welcome this proposition as an unrestricted authorisation," Van Bael & Bellis data protection lawyer Tanguy Van Overstraeten and others said in a Nov. 25 analysis. The need to conduct and document "a robust legitimate interest assessment" will likely draw regulators' scrutiny.
The proposal also introduces an unconditional right to object to the use of one's personal data for such training, Van Overstraeten said. "This should trigger enhanced transparency and user-friendly opt-out mechanisms that may, however, be difficult to implement in practice."
In addition, the package broadens the possibilities for processing special categories of personal data (sensitive data) for AI training, Wulf noted. A new Article 9(2) provision allows residual processing of sensitive data for training, testing or operating AI systems, provided that the controller takes effective technical and organizational measures to avoid collecting such data as far as reasonably possible, he said.
Controllers must identify and remove such data as soon as it's discovered, unless removal would require a disproportionate effort, Wulf said. In that case, the data must be safeguarded in such a way that it can't be used to generate model outputs, he added.
The provision would allow the use of biometric data to confirm an individual's identity, provided that it's solely under their control, such as a user's fingerprint to unlock their device, Hewson wrote. But "requiring controllers to remove sensitive data from large data sets may not be enough to prevent misuse."