2023.05.12

Will the Digital Services Act (DSA) revolutionise the internet? The present and the future of algorithmic content moderation.

First, the AI4Media Deliverable D6.2 “Report for Policy on Content Moderation” introduces the concept of “algorithmic content moderation” and explains how matching and classification (or prediction) systems are used to decide on content removal, geoblocking, or account takedown. It then provides an overview of the challenges and limitations of automation in content moderation: the lack of context differentiation, the lack of representative, well-annotated datasets for machine learning training, and the difficulty of computationally encoding sociopolitical concepts such as “hate speech” or “disinformation”.
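To make the distinction between matching and classification concrete, the following is a minimal, hypothetical sketch (not taken from the deliverable): it pairs hash-based matching against previously removed items with a generic, scikit-learn-style text classifier. The action names, thresholds, and the `classifier` object are all illustrative assumptions.

```python
from dataclasses import dataclass
from enum import Enum
import hashlib

class Action(Enum):
    NONE = "none"
    GEOBLOCK = "geoblock"
    REMOVE = "remove"
    ACCOUNT_TAKEDOWN = "account_takedown"

@dataclass
class ModerationDecision:
    action: Action
    rationale: str
    confidence: float

# Hashes of items already judged illegal; real systems typically use perceptual
# hashes for images and video rather than exact cryptographic hashes.
KNOWN_ILLEGAL_HASHES: set[str] = set()

def moderate(content: bytes, text: str, classifier) -> ModerationDecision:
    """Toy two-stage pipeline: matching first, then classification."""
    # Stage 1: matching -- re-identify content that has already been removed once.
    digest = hashlib.sha256(content).hexdigest()
    if digest in KNOWN_ILLEGAL_HASHES:
        return ModerationDecision(Action.REMOVE, "matched previously removed item", 1.0)

    # Stage 2: classification -- predict whether *new* content violates a policy.
    # `classifier` stands in for any trained model exposing predict_proba().
    p_violation = classifier.predict_proba([text])[0][1]
    if p_violation > 0.95:
        return ModerationDecision(Action.REMOVE, "classifier: likely violation", p_violation)
    if p_violation > 0.70:
        # Lower confidence: restrict visibility in some regions instead of removing.
        return ModerationDecision(Action.GEOBLOCK, "classifier: possible violation", p_violation)
    return ModerationDecision(Action.NONE, "no violation detected", p_violation)
```

The hard-coded thresholds are precisely where the report’s concerns surface: set them too low and lawful speech is over-blocked, set them too high and harmful content is under-removed, and neither the hash list nor the classifier sees the context in which a post was made.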

The tension between content moderation and the fundamental human right to freedom of expression is another research theme. The right to freedom of expression in Europe is enshrined in Article 10 of the European Convention on Human Rights (ECHR) and Article 11 of the EU Charter of Fundamental Rights (CFR) and includes the right to freely express opinions, views, and ideas and to seek, receive, and impart information regardless of frontiers. The use of algorithmic content moderation tools may undermine freedom of information, since these systems might not distinguish adequately between lawful and unlawful content, leading to the over-blocking of lawful communications. On the other hand, the under-removal of certain types of content results in a failure to address hate speech and may create a “chilling effect” on some individuals’ and groups’ willingness to participate in online debate.

Second, the report analyses the EU legal landscape concerning content moderation along two dimensions. It first examines the horizontal rules, which apply to all types of content: the e-Commerce Directive, the newly adopted Digital Services Act (DSA), and the Audiovisual Media Services Directive (AVMSD), which imposes obligations on video-sharing platforms. It then focuses on rules which apply to specific types of content: terrorist content, child sexual abuse material (CSAM), copyright-infringing content, racist and xenophobic content, disinformation, and hate speech. For each of these initiatives, the report provides a description of the main concepts, a critical assessment, and future-oriented recommendations.

The Digital Services Act (DSA), which entered into force on 16 November 2022, is subject to detailed analysis given its recency and novelty. The main aims of the new rules are to:

  • Establish a horizontal framework for regulatory oversight, accountability and transparency of the online space
    • One of the measures foreseen by the DSA includes the obligation for online platforms to publish yearly transparency reports, detailing their algorithmic content moderation decisions.
  • Improve the mechanisms for the removal of illegal content and for the effective protection of users’ fundamental rights online.
    • The DSA establishes a notice-and-action framework for content moderation. This mechanism allows users to report the presence of (allegedly) illegal content to the service provider concerned and requires the provider to take action in a timely, diligent, non-arbitrary, and objective manner.
  • Propose rules to ensure greater accountability on how platforms moderate content, on advertising and on algorithmic processes.
    • In particular, according to Article 14 DSA, online platforms remain free to decide what kind of content they do not wish to host, even if this content is not actually illegal. They must, however, make this clear to their users. Moreover, any content moderation decision must be enforced ‘in a diligent, objective and proportionate manner’ and with due regard to the interests and fundamental rights involved.
    • Importantly, Article 17 requires providers of hosting services to give any affected recipients of the service a clear and specific statement of reasons for content moderation decisions (an illustrative sketch of such a statement follows this list).
  • Provide users with possibilities to challenge the platforms’ content moderation decisions.
    • The DSA offers new redress routes which affected users can pursue in sequence or separately: an internal complaint-handling system and out-of-court dispute settlement.
  • Impose new obligations on very large online platforms (VLOPs) and very large online search engines (VLOSEs) to assess and mitigate the systemic risks posed by their systems.
    • VLOPs and VLOSEs have the obligation to self-assess the systemic risks that their services may cause and adopt mitigation measures such as adapting their content moderation and recommender systems policies and processes.
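To give a sense of what the Article 14 and Article 17 obligations could look like in practice, below is a hypothetical, machine-readable “statement of reasons” record. The field names are illustrative assumptions and do not reproduce the DSA’s wording or any official schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class StatementOfReasons:
    """Hypothetical record a hosting provider might send to an affected user
    after a moderation decision (cf. Article 17 DSA); all fields are illustrative."""
    content_id: str
    decision: str                 # e.g. "removal", "visibility restriction", "account suspension"
    ground: str                   # the illegal-content or terms-of-service ground relied on
    facts_and_circumstances: str  # why the provider considered that ground to apply
    automated_detection: bool     # whether automated means were used to detect the content
    automated_decision: bool      # whether the decision itself was taken by automated means
    redress_options: tuple = (
        "internal complaint-handling system",
        "out-of-court dispute settlement",
        "judicial redress",
    )
    issued_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
```

A record along these lines could also feed the yearly transparency reports mentioned above, since both obligations push providers to document why and how each decision was taken.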

It remains to be seen whether the DSA will be a “success story”. Besides the elements listed above, the DSA also provides a role for a community of specialised trusted flaggers who notify problematic content, a new mechanism for access to platforms’ data under Article 40, and a system of enforcement and penalties for non-compliance.

Third, the report also offers a perspective on future trends and alternative approaches to content moderation. These include end-user or community-led moderation, such as the voluntary moderation found on platforms like Wikipedia and Discord. Next, the deliverable outlines content moderation practices in the fediverse, using the Mastodon project as a case study. Although these forms of moderation have many advantages, the lack of a centralised fediverse authority means there is no way to fully exclude even the most harmful content from the network as a whole (a minimal sketch of this instance-level model follows this paragraph). Fediverse administrators also generally have fewer resources, as moderation is largely run by volunteers, so much will depend on whether and how decentralised content moderation scales. Finally, the report analyses content moderation in the metaverse, which can be described as an immersive 3D world. One of the key research questions concerns the applicability of the newly adopted DSA to illegal or harmful metaverse content. The need to further amend EU law cannot be ruled out, since virtual reality is not specifically addressed in the DSA; there are, however, interpretations which suggest that virtual 3D worlds fall within its scope.
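Returning to the fediverse point above, the absence of a central authority can be illustrated with a minimal, hypothetical sketch of instance-level federation policy, loosely inspired by how Mastodon administrators silence or suspend domains; the policy names and domains below are invented. Each administrator maintains their own list, so suspending a domain only removes its content from that one instance, never from the network as a whole.

```python
from enum import Enum

class FederationPolicy(Enum):
    ACCEPT = "accept"    # federate normally
    SILENCE = "silence"  # hide from public timelines, but still reachable
    SUSPEND = "suspend"  # reject all content from that domain

# Hypothetical per-instance configuration: every instance keeps its own list,
# so a domain suspended here may still federate freely everywhere else.
INSTANCE_DOMAIN_POLICY = {
    "spam.example": FederationPolicy.SUSPEND,
    "borderline.example": FederationPolicy.SILENCE,
}

def accept_incoming_post(origin_domain: str) -> bool:
    """Decide whether this single instance accepts a federated post."""
    policy = INSTANCE_DOMAIN_POLICY.get(origin_domain, FederationPolicy.ACCEPT)
    return policy is not FederationPolicy.SUSPEND
```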

Fourth, the report outlines the advantages and challenges of self-regulatory accountability mechanisms such as the Facebook Oversight Board (FOB) and the civil society-proposed Social Media Councils. The FOB, like the Twitter Trust and Safety Council, the TikTok Content Advisory Council, the Spotify Safety Advisory Council, and Twitch’s Safety Advisory Council, has both supporters and critics. Overall, such bodies may provide a valuable complement to robust, international legislation and an additional venue for users’ complaints against platforms.

Fifth, the report presents the main takeaways and results of the workshop on AI and Content Moderation organised by two AI4Media consortium partners – KUL and UvA – which brought together academics, media companies, a representative of a very large online platform, and a consultant from an intergovernmental organisation.

Last, the deliverable offers both high-level recommendations and content-specific recommendations on the moderation of terrorist content, copyright-protected content, child sexual abuse material, hate speech, and disinformation. It concludes that there is no easy way to address the many-layered complexity of content moderation. Effective enforcement of the new rules will be key to balancing the removal of unwanted and illegal content against online users’ fundamental right to express themselves freely.

Author: Lidia Dutkiewicz, Center for IT & IP Law (CiTiP), KU Leuven