2024.01.24

How do social media platforms address AI-generated or manipulated content?

The advancement of artificial intelligence (AI) technologies poses a persistent challenge in the fight against disinformation. These technologies make it easier to manipulate content and facilitate its rapid dissemination on the internet, sometimes resulting in intended or unintended harm and deception. Social media platforms often serve as the primary intermediaries between such content and users. It is therefore crucial to understand their approaches to moderating AI-manipulated and AI-generated content that can potentially circulate as misinformation or disinformation. EU DisinfoLab has developed a factsheet and an analytical framework to analyse and compare the policies of five big platforms in this regard.

The EU DisinfoLab factsheet explores how Facebook, Instagram, TikTok, X, and YouTube address AI-manipulated or AI-generated content within their terms of use, with a specific focus on the risk of such content circulating as misinformation or disinformation.

The analysis found that definitions diverge across platforms. Different terms are used to refer to AI-generated or AI-manipulated content, including “deepfakes”, “synthetic media”, or “digitally altered content”. The factsheet reports that “artificial intelligence” is mentioned only to a limited extent within the platforms’ policies designed to combat misinformation.

In addition, platforms often neglect to mention AI-generated text and focus primarily on AI-generated images and videos within their policies. While they do address manipulated content, generated content typically receives less attention. Notably, platforms are increasingly responding to this challenge by incorporating specific provisions for moderating content generated or manipulated using AI technologies. However, these rules are at times limited in scope and confined to content deemed more sensitive, such as political ads.

In cases such as TikTok, where platforms explicitly address synthetic or AI-manipulated media, they try to distinguish between allowed and banned uses. The driving force is either the content’s potential to mislead and harm, or a more compliance-oriented approach focused on copyright and quality standards. It is important to note that such subjective criteria can sometimes be exploited by malicious actors. In many cases, platforms attempt to tackle this issue by requiring users to label AI-manipulated or AI-generated content, placing the responsibility on the user.

From a regulatory point of view, all the studied platforms qualify as Very Large Online Platforms (VLOPs) under the DSA, and all except X abide by the strengthened Code of Practice on Disinformation. Consequently, they are all bound to fulfil the due diligence obligations of the DSA, including justifying the means they deploy to combat disinformation on their services. The Code’s signatories must also establish or confirm their policies on AI-generated or AI-manipulated content.

The factsheet concludes with a set of recommendations, including a call on platforms to continue responding with effective policy changes to meet the new needs posed by rapidly evolving technologies, to enhance cooperation with experts, and to clarify the burden of responsibility on this complex topic.

The factsheet and full analysis can be consulted here: https://www.disinfo.eu/publications/platforms-policies-on-ai-manipulated-and-generated-misinformation/

Author: Raquel Miguel, EU DisinfoLab
Reviewer: Noémie Krack, KU Leuven Centre for IT & IP Law – imec
Layout and design: Heini Järvinen, EU DisinfoLab