AI4Media’s Achievements and Impact on Society

Over the past four years, AI4Media has made significant strides in harnessing the power of artificial intelligence to address key societal challenges and enhance various aspects of public life. This comprehensive effort has resulted in notable advancements in combating disinformation, improving public understanding of AI, and developing tools that support democratic processes and personal privacy. These are some of the key achievements that highlight AI4Media’s impact on citizens and society:

“Artificial Intelligence: Possibilities and Challenges” Exhibition

AI4Media co-organised a temporary museum exhibition titled “Artificial Intelligence: Possibilities and Challenges” at the NOESIS Science Center & Technology Museum in Thessaloniki, open from April 2024. This exhibition targeted school children and the general public, aiming to demystify AI by exploring topics such as generative AI, AI bias, disinformation, and sustainability. Featuring numerous interactive elements, the exhibition provided an engaging and educational experience, helping to foster a better understanding of AI’s potential and challenges among visitors.

Advancing the Fight Against Disinformation

As part of their use case within AI4Media, Germany’s international broadcaster Deutsche Welle (DW) and the Athens Technology Centre (ATC) developed a demonstrator for testing new AI services in a media business environment. Integrated into the “lab version” of the Truly Media platform, these services included verification tools for video, audio, and text content, such as deepfake analysis and text verification. The services were tested against business requirements and contributed significantly to improving Trustworthy AI and AI compliance in media tools.

Successful Use of Deepfake Detection Service

In collaboration with the Horizon Europe project vera.ai, AI4Media developed the RINE method for synthetic image detection, now integrated into the Fake News Debunker browser plugin. This tool, used by over 130,000 journalists and fact-checkers globally, has been pivotal in flagging AI-generated images in significant events, such as the European Elections, the War in Ukraine, and the Israel-Palestine conflict. The AFP (Agence France-Presse) successfully used this service to debunk disinformation during these high-profile events.

Political Barometer: Predicting EU Election Outcomes in Greece

Developed by the Artificial Intelligence and Information Analysis Laboratory of the Department of Informatics at Aristotle University of Thessaloniki (AUTH), the Political Barometer software performs political opinion polling and election result prediction using sentiment analysis of political tweets. This innovative tool, which analyses daily tweets about political parties and integrates past election results and classical poll data, demonstrated high accuracy in predicting outcomes for the Greek parliamentary elections and the European elections of June 2024.
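The fusion of a social-media sentiment signal with classical poll data can be sketched as follows. This is a hypothetical illustration, not the AUTH system: the real Political Barometer uses trained sentiment models and integrates past election results, whereas here a toy word lexicon stands in for the classifier and a single weighted blend stands in for the fusion step. All names (`tweet_sentiment`, `predict_vote_share`, `alpha`) are illustrative.

```python
# Toy sentiment lexicon; the real system uses trained sentiment analysis models.
POSITIVE = {"great", "support", "win", "hope"}
NEGATIVE = {"fail", "scandal", "corrupt", "lose"}

def tweet_sentiment(text: str) -> int:
    """Return +1 (positive), -1 (negative), or 0 (neutral) from the toy lexicon."""
    words = set(text.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    return (pos > neg) - (neg > pos)

def predict_vote_share(tweets_by_party: dict, poll_share: dict, alpha: float = 0.3) -> dict:
    """Blend each party's share of positive tweets with classical poll numbers.

    alpha weights the Twitter signal; (1 - alpha) weights the polls.
    """
    pos_counts = {party: sum(tweet_sentiment(t) > 0 for t in tweets)
                  for party, tweets in tweets_by_party.items()}
    total = sum(pos_counts.values()) or 1  # avoid division by zero
    twitter_share = {party: count / total for party, count in pos_counts.items()}
    return {party: alpha * twitter_share[party] + (1 - alpha) * poll_share[party]
            for party in tweets_by_party}
```

In practice the weighting between the social-media signal and poll data would itself be calibrated against past election outcomes, which is where the integration of historical results mentioned above comes in.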

YDSYO App for Assessing Social Media Content Impact

The YDSYO mobile app prototype, developed under AI4Media, uses AI to provide feedback on the potential real-life effects of sharing photographs on social media. It analyses visual content from the user’s smartphone, aggregates the results into a profile, and rates that profile for situations like job searches, loan applications, or housing searches. The app offers users control mechanisms, such as masking or deleting photos, to manage their online presence effectively. All processing is done locally on the device, ensuring user privacy and control over data.

Analysing European Local News with NLP Tools

In the AI4Media project, the Idiap Research Institute in Switzerland developed an analytical framework for local news using open-source natural language processing (NLP) tools. This framework was applied to analyse news sources at both the European and hyper-local levels. By examining local media, which plays a crucial role in maintaining community ties and addressing the crisis of trust in national media, the project highlighted the importance of local journalism in the democratic process.

Through these initiatives, AI4Media has significantly contributed to societal advancements by enhancing public understanding of AI, combating disinformation, improving political discourse analysis, and providing tools for better social media content management. These achievements underscore the project’s commitment to leveraging AI for the public good, ensuring that technological advancements benefit all segments of society.

It is time to enforce AI regulation before adding more to the mix

The time has come to see emerging AI regulations enforced before renegotiating them or developing further regulatory initiatives. That was the key takeaway from the event titled ‘EU Vision for Media Policy in the Era of AI’, organised by KU Leuven as part of the AI4Media project in mid-June.

On June 19, 2024, regulators, researchers, practitioners, and even the Flemish minister for Brussels, Youth, Media and Poverty Reduction, Benjamin Dalle, came together in one room to discuss the transformative potential of AI in the media sector and how regulation can meaningfully shape the way AI is developed and used. The event was hosted at the Belgian Institute for Postal Services and Telecommunications (BIPT) in Brussels and organised by KU Leuven’s (KUL) Centre for IT & IP Law (CiTiP) as part of the Horizon 2020 project AI4Media.

The day started with a keynote by Benjamin Dalle, who highlighted a two-sided role for policymakers and regulators: regulating the challenges posed by AI while supporting AI development in Europe. This became a dominant theme throughout the day, as both media practitioners and researchers highlighted how regulation should enable responsible AI development and use, but also, crucially, be enforced to hinder malicious uses of AI and concentrations of power.

This dual need was also evident in the policy recommendations presented during the event by Lidia Dutkiewicz and Noémie Krack from KUL, and Anna Schjøtt Hansen from the University of Amsterdam (UvA). The final recommendations, based on four years of research into the core policy needs of the media sector, will be delivered to the European Commission by the end of August. Both supportive mechanisms, such as sustainable funding schemes, and mitigating measures, such as ensuring access to the APIs of large platforms, were presented as highly relevant to supporting media independence, plurality, and the media’s watchdog function.

However, in the final panel of the day composed of Peggy Valcke from BIPT & KUL, Renate Schroeder from the European Federation of Journalism (EFJ), Júlia Tar from Euractiv, and Tomás Dodds from Leiden University, it also became clear that one important policy need is to slow down the ongoing flow of new AI legislation, to give regulators and media organisations a chance to implement, enforce and learn from their experiences with the new legal frameworks.

Looking into a period of enforcement and learning

With the Digital Services Act (DSA), the Digital Markets Act (DMA), and the AI Act all adopted within the last few years, the coming period will be one focused on creating meaningful enforcement and on learning about both the positive and negative impacts of the regulation before renegotiating.

This was also highlighted in the panel that featured three of the four media regulatory bodies of Belgium, including Bernardo Herman from the Belgian Institute for Postal Services and Telecommunications (BIPT), François Jongen from the Medienrat, and Carlo Adams from the Vlaamse Regulator voor de Media (VRM). For them, this landscape was one they were only beginning to navigate, and they all underlined the importance of adequate staffing, resources, and recruitment of specialised talent across disciplines as prerequisites for efficient enforcement.

A constantly evolving AI landscape will produce new challenges

The challenges of developing and integrating AI in the media sector had been discussed earlier in the day by media practitioners including Rasa Bocyte from the Netherlands Institute for Sound & Vision (NISV), Chaja Libot from the Flemish public broadcaster (VRT), Frank Visser representing the DRAMA project and Angel Spasov from Imagga.

While they all agreed that much value is to be gained from AI when it is done right, getting it right is exactly the tricky part. As Rasa Bocyte noted at the start of her presentation: “Integration of AI in media is not straightforward”, highlighting how media professionals try to navigate this responsibly but face the dichotomy of wanting to move fast while remaining organisationally cautious and slow in order to protect societal values.

Rasa Bocyte from the Netherlands Institute for Sound & Vision (NISV) presenting the results from the AI4Media workshop on AI integration in media

While the media practitioners welcomed a pause in the stream of new legislation, they also stressed the importance of keeping up with the new challenges that AI will continue to pose for the media sector, such as the ongoing debate around copyrighted training data.

Rasa Bocyte also introduced the AI4Media initiative How is the Media Sector Responding to Content Crawling for Model Training as a concrete effort to gain an overview of these challenges, an area where the room agreed that EU legislation has yet to capture the full extent of the problem.

A missed legislative opportunity?

Many media practitioners and researchers find it important to consider how we regulate AI’s societal risks, including worker displacement and environmental costs. Equally important are the power imbalance between big tech and the media, and the limited support for long-term sustainable funding and upskilling. These issues remain only partially addressed. While the mantra of the day remained the call for ‘no more legislation’, it was also stressed that it would be important to revisit the newly passed acts in a few years, once we have learned more about their real-world effects, to better address these challenges. Only then should legislators consider new actions to close the enforcement gaps.

Anna Schjøtt Hansen from the University of Amsterdam (UvA) presenting six cross-cutting policy needs for the media sector

The need for interdisciplinarity & collaboration

The various panels emphasized the critical need for collaboration across stakeholders active in the media sector. It was highlighted that shaping an effective EU agenda and ensuring the responsible integration of AI in the media sector cannot be achieved in isolation. The discussions underscored that collective effort and multi-stakeholder engagement are essential to navigate the complexities and harness the full potential of AI use in the media sector. The closing remarks invited all participants to commit to ongoing dialogue and cooperation to drive forward responsible technology development and AI strategy for the media sector.


Fact box: AI4Media & find out more

The event was part of the AI4Media project, which aims to strengthen Europe’s Excellence in AI for Media, Society, and Democracy and ensure the development of ethical and trustworthy AI.

Importantly, the insights generated at the event will feed into the final policy recommendations, which will be published by the end of August and sent to the European Commission. In the following weeks, four blog posts discussing where the current legislation is finding and missing its mark will also be published on the AI Media Observatory’s Medium page.

To gain an overview of the work that has led up to the policy recommendations, you can find the reports, factsheets, and whitepapers that disseminate the results in this brochure.