AI4Media conducted an initial analysis of the legal and ethical framework for trusted AI, addressing the question of how GDPR provisions should be interpreted when applied to AI systems.
This work comprised the following:
Firstly, the research showed that, although the GDPR does not refer to “artificial intelligence”, many provisions of the legal text are relevant for AI systems. It also highlighted a lack of clarity, with uncertainties and diverging views between scholars and interpretative guidelines: the academic literature revealed opinions within the research community that sometimes converge and in other cases conflict on the scope of certain GDPR provisions applied to AI systems. This part of the research also introduced the use of AI systems in the media environment, including recommender and targeted advertising systems.
Then, it delivered a comprehensive description of the overarching principles of the GDPR, including lawfulness, fairness, and transparency. It also provided a detailed analysis of the GDPR Art. 5 principles of purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality (security), and accountability.
The different data subject rights were also analysed as they apply in the context of AI: the right to be informed, the right not to be subject to a decision based solely on automated processing, the so-called right to explanation, the right of access, the right to rectification, the right to erasure, the right to restrict processing, and the right to object.
The report also presented the growing challenges involved in complying with data subjects' requests to enforce their rights over big datasets, including complexities linked to the different stages of AI system processing, transparency and the right to information as keys to exercising the other rights, uncertainties about how data subject rights apply, AI system interfaces that are unfriendly to rights enforcement, and a lack of enforcement leading to trade-offs.
The analysis also briefly touched upon upcoming European legislation relevant to the GDPR and to AI systems processing personal data, including the AI Act proposal, the Data Governance Act proposal, and the proposed Data Act. The EU legislator seems well aware of the current challenges at the intersection of the GDPR and AI, as these upcoming instruments try to complement the GDPR and to create additional safeguards, data quality requirements, and favourable conditions for enhanced data sharing. However, as these instruments are still being negotiated, it remains to be seen how this will materialise.
Finally, the report presents a set of initial recommendations built upon the analysis conducted during the first 18 months of the project. These recommendations address ways to ensure the development of trusted and GDPR-compliant AI, drawing conclusions on the gaps and challenges identified throughout the report and proposing ways forward to mitigate and prevent the identified issues.
What’s next? Further research will dive deeper into the legal data protection framework for the use of AI applications in media environments and will investigate how people can be made aware of what is being done with their data. This deliverable is indeed the first step toward the final analysis, which is due in August 2023.
Access the full report HERE
Author: Noémie Krack (KU Leuven)