Understanding the EDPB’s Take on AI and Personal Data
On 18 December 2024, the European Data Protection Board (EDPB) issued an opinion addressing key data protection concerns in AI development and deployment. The opinion, requested by the Irish Data Protection Commission, aims to ensure regulatory consistency across the EU and provides essential guidance for organizations using personal data to train and deploy AI models. Here are the highlights, explained in accessible terms.
When is an AI Model Truly Anonymous?
Anonymity is a central concept under the GDPR. The EDPB opinion emphasizes that AI models trained on personal data cannot always be considered anonymous. Whether an AI model is anonymous depends on whether:
It is unlikely that personal data used in training can be extracted from the model, directly or indirectly.
It is unlikely that the model's outputs relate to the individuals whose data was used in training.
A case-by-case assessment is essential, considering the methods used to anonymize data and the risk of re-identification. The EDPB provides a non-exhaustive list of measures to demonstrate anonymity, such as limiting the collection of personal data, applying pseudonymization, and using privacy-preserving techniques like differential privacy.
Organizations must also account for “means reasonably likely to be used” to identify individuals. This includes analyzing the characteristics of the training data, potential risks of re-identification, and evolving technological capabilities. Notably, controllers must document these steps thoroughly to demonstrate compliance.
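To make the techniques mentioned above more concrete, here is a minimal Python sketch of pseudonymization (a keyed hash replacing an identifier) and of differential privacy (noise added to an aggregate statistic). The function names, key handling, and parameters are invented for illustration; the EDPB opinion does not prescribe any particular implementation.

```python
import hashlib
import hmac
import random

# Illustrative only: in practice the key must be generated securely and
# stored separately from the dataset it pseudonymizes.
SECRET_KEY = b"store-this-key-separately-from-the-data"

def pseudonymize(user_id: str) -> str:
    """Replace an identifier with a keyed hash (pseudonymization).

    Note: pseudonymized data is still personal data under the GDPR,
    because whoever holds the key can re-link the pseudonym to the
    individual.
    """
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

def dp_noisy_count(true_count: int, epsilon: float = 1.0) -> float:
    """Return a differentially private count using Laplace noise.

    Noise calibrated to the query's sensitivity (here 1: one person
    changes a count by at most 1) bounds what any single output can
    reveal about any single individual.
    """
    scale = 1.0 / epsilon
    # The difference of two independent exponential samples follows a
    # Laplace distribution with the same scale parameter.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise
```

A smaller epsilon means more noise: stronger privacy guarantees, but less accurate statistics. Choosing that trade-off is exactly the kind of case-by-case assessment the EDPB expects controllers to document.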
Legitimate Interest as a Legal Basis for AI Development
The opinion outlines how legitimate interest can serve as a legal basis for processing personal data in AI development and deployment. However, this requires a rigorous three-step test:
Identifying a Legitimate Interest: The interest must be lawful, clearly articulated, and real (not speculative). Examples of legitimate interests include developing conversational agents to assist users or deploying AI to improve cybersecurity by detecting threats.
Necessity of Processing: The processing must be strictly necessary to achieve the identified legitimate interest. Controllers should explore whether less intrusive methods could achieve the same objective. For instance, the amount of data processed should align with the GDPR’s data minimization principle.
Balancing of Rights: The interests of the controller must not override the fundamental rights and freedoms of data subjects. This balancing test considers various factors, such as the nature of the relationship between the controller and the data subject, whether the data was publicly available, and the individuals’ reasonable expectations about how their data would be used. For example, the mere fact that personal data was scraped from public sources does not by itself make its use lawful; proper safeguards are still required.
To mitigate negative impacts on individuals, the EDPB suggests measures such as improving transparency, facilitating the exercise of data subjects’ rights, and adopting technical safeguards to limit the risks of harm.
What Happens if Data Was Processed Unlawfully?
AI models developed with unlawfully processed personal data present significant challenges. The EDPB identifies three scenarios:
If personal data remains in the model and is processed during deployment, the lawfulness of the deployment depends on whether the initial unlawful processing impacts subsequent use.
If the model is shared with another controller, the receiving controller must ensure compliance by assessing the lawfulness of the model’s development.
If the data is anonymized after the model is developed, the GDPR no longer applies to the anonymized model. However, any subsequent processing of new personal data during deployment must comply with GDPR.
These scenarios underscore the importance of conducting robust due diligence and maintaining thorough records throughout the AI lifecycle. Controllers should anticipate scrutiny from supervisory authorities and be prepared to demonstrate compliance with GDPR principles.
Final Thoughts
The EDPB’s opinion provides useful guidance for IT lawyers and organizations on GDPR compliance in AI. It highlights critical issues like anonymity, legitimate interest, and the consequences of unlawful data processing, offering a foundation for responsible innovation. However, the opinion leaves room for improvement. It does not fully address the operational challenges of demonstrating compliance with the balancing test for legitimate interest or the nuances of mitigating risks when using public data.
For more information about the risk mitigation measures suggested by the EDPB, see our second blog post on the topic.
At CuratedAI, we are closely following these regulatory developments to ensure our products reflect the latest standards. Security and privacy are central to our mission, and we continuously integrate these principles into our platform. Register at app.curatedai.eu today and start your free trial!

Siyanna Lilova
Dec 20, 2024