AI and Privacy Compliance: Risk Mitigation Measures according to the EDPB
The European Data Protection Board (EDPB) recently issued an opinion addressing the challenges of data protection in AI development and deployment. Requested by the Irish supervisory authority (the Data Protection Commission), the guidance focuses on critical areas such as risk mitigation, lawful processing, and the distinction between AI development and deployment. For privacy professionals, understanding the technical and organizational measures highlighted in the opinion is essential for ensuring GDPR compliance.
Key Risk Mitigation Measures in AI
The EDPB emphasizes the importance of implementing robust risk mitigation measures to ensure AI systems align with GDPR principles. These measures should address risks across the lifecycle of AI, from development to deployment. Key recommendations include:
Data Minimization: AI models should process only the data necessary for their purpose. This involves strategies such as filtering out irrelevant data during preprocessing and avoiding the collection of personal data whenever possible.
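As a simple illustration of this kind of preprocessing filter (our own sketch, not part of the EDPB opinion; the field names are hypothetical), a pipeline can whitelist only the attributes the model actually needs and drop everything else before training:

```python
# Data-minimization sketch: keep only the fields required for the
# model's purpose and discard all other attributes before training.
# Field names are hypothetical examples.

ALLOWED_FIELDS = {"product_rating", "review_text", "purchase_category"}

def minimize(record: dict) -> dict:
    """Return a copy of the record containing only allowed fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "product_rating": 4,
    "review_text": "Works well.",
    "purchase_category": "electronics",
    "email": "user@example.com",   # personal data, not needed for training
    "ip_address": "203.0.113.7",   # personal data, not needed for training
}

clean = minimize(raw)
```

A whitelist (rather than a blacklist of known identifiers) is the safer default here, because new personal-data fields added upstream are excluded automatically instead of slipping through.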
Anonymization and Pseudonymization: Where personal data is used, techniques such as pseudonymization or differential privacy can help reduce risks. Pseudonymization replaces identifying information with coded references that can only be linked back to an individual using separately held additional information, while differential privacy ensures that aggregate outputs do not reveal specifics about individual data points.
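To make the two techniques concrete, here is a minimal sketch (our illustration, not a production implementation): pseudonymization via a keyed hash that stands in for the coded reference, and differential privacy via Laplace noise added to a counting query. The key value is a placeholder and must in practice be stored separately and securely.

```python
import hashlib
import hmac
import random

# Pseudonymization sketch: replace a direct identifier with a keyed
# hash (a coded reference). The key must be kept separate from the
# pseudonymized data set.
SECRET_KEY = b"replace-with-a-securely-stored-key"  # illustrative only

def pseudonymize(identifier: str) -> str:
    """Deterministic coded reference for an identifier."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

# Differential-privacy sketch: perturb an aggregate count so the output
# does not reveal whether any single individual is in the data. The
# difference of two Exp(epsilon) draws is Laplace-distributed with
# scale 1/epsilon, matching the sensitivity of 1 for a counting query.
def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```

Note the trade-off the epsilon parameter encodes: smaller values add more noise and give stronger privacy, at the cost of less accurate aggregates.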
Transparency Measures: Controllers must ensure that individuals understand how their data is used. This includes clear communication about the collection and processing of public data, such as through web scraping. Transparency also involves informing data subjects about their rights and the purposes of data processing.
Technical Safeguards Against Re-identification: To prevent re-identification risks, the opinion highlights the importance of testing AI models for vulnerabilities. Examples include membership inference and model inversion attacks. Supervisory authorities (SAs) are encouraged to assess whether organizations have documented and tested these safeguards effectively.
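One simple way to probe for membership-inference exposure (a sketch of the general idea, not a method prescribed by the EDPB; `model_confidence` is a hypothetical stand-in for querying a real model) is to compare the model's confidence on training examples against held-out examples: a large gap suggests the model is memorizing, and therefore leaking, membership information.

```python
# Membership-inference check sketch: compare average model confidence on
# training data versus held-out data. A gap near zero indicates lower
# membership-inference risk; a large gap indicates memorization.

def model_confidence(example: str, training_set: set) -> float:
    # Hypothetical overfitted model: very confident on examples it has seen.
    return 0.99 if example in training_set else 0.6

def membership_gap(train_samples, holdout_samples, training_set) -> float:
    train_avg = sum(model_confidence(x, training_set)
                    for x in train_samples) / len(train_samples)
    holdout_avg = sum(model_confidence(x, training_set)
                      for x in holdout_samples) / len(holdout_samples)
    return train_avg - holdout_avg

train = ["a", "b", "c"]
holdout = ["x", "y", "z"]
gap = membership_gap(train, holdout, set(train))
```

Documenting such tests, including thresholds used and results over time, is exactly the kind of evidence the opinion suggests supervisory authorities may ask for.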
Documentation and Accountability: Controllers must document their processing activities thoroughly. This includes maintaining records of data sources, measures taken to ensure data minimization, and risk assessments of anonymization techniques. Proper documentation is essential to demonstrate compliance under GDPR’s accountability principle.
Legal Bases and the Role of Legitimate Interest
The EDPB’s opinion reiterates that legitimate interest can serve as a legal basis for data processing in AI development, provided the following conditions are met:
Legitimacy: The processing must pursue a lawful and clearly defined interest, such as improving cybersecurity or developing user-assistance tools like conversational agents.
Necessity: The data processing must be strictly necessary to achieve the intended purpose. Less intrusive alternatives should be considered first.
Balancing of Rights: The legitimate interest must not override the fundamental rights and freedoms of data subjects. This includes considering the context of data collection, whether the data was publicly available, and whether individuals could reasonably expect their data to be used for the stated purpose.
Mitigating Risks in Practice
The opinion provides practical examples of mitigating measures that can reduce risks to data subjects:
Designing AI Models with Privacy by Design: Integrating privacy measures at the development stage, such as limiting data retention and ensuring outputs do not inadvertently reveal personal data.
Regular Testing and Auditing: Conducting ongoing tests for vulnerabilities, including resistance to state-of-the-art attacks such as training-data regurgitation (extraction) or unauthorized inference of personal data.
Tailored Access Controls: Restricting access to AI models and their outputs based on the role and need of the user. For example, models deployed internally might have stricter access controls than publicly available ones.
Enhanced User Controls: Providing mechanisms for individuals to access, correct, or delete their data and making it easier to exercise their rights under GDPR.
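The tailored-access-control measure above can be reduced to a small role-to-permission mapping; the roles and actions below are hypothetical examples of our own, not terms from the opinion:

```python
# Role-based access sketch for model endpoints: each role maps to the
# set of actions it is permitted to perform. Internal roles get broader
# access than public-facing ones.
PERMISSIONS = {
    "internal_analyst": {"query_model", "view_logs"},
    "public_user": {"query_model"},
}

def can(role: str, action: str) -> bool:
    """Return True if the role is permitted to perform the action."""
    return action in PERMISSIONS.get(role, set())
```

Unknown roles deny by default, which keeps the check fail-closed when new roles are introduced without explicit permissions.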
Addressing Unlawfully Processed Data
If personal data is processed unlawfully during AI model development, the opinion outlines three scenarios:
Data Retention in the Model: If the model retains personal data and processes it during deployment, the initial unlawful processing may affect the lawfulness of subsequent use.
Transfer to Another Controller: When sharing a model with another controller, the receiving entity must ensure the model’s development complied with GDPR.
Anonymization of Data: If data is anonymized before deployment, the GDPR no longer applies to the anonymized model. However, any new personal data processed during deployment must still comply with GDPR requirements.
Final Thoughts
The EDPB’s guidance highlights the importance of integrating robust risk mitigation measures into AI workflows. While it provides a strong foundation, certain areas—such as defining reasonable expectations for data subjects and evaluating the effectiveness of anonymization techniques—could benefit from more detailed guidance. For privacy professionals, the opinion underscores the need for meticulous documentation, rigorous testing, and proactive engagement with evolving regulations.
At CuratedAI, we prioritize security and privacy by closely following regulatory developments and implementing them in our products. Register at our platform today and start for free at app.curatedai.eu.
Siyanna Lilova
Dec 29, 2024