How can we integrate AI successfully in news production? VRT shares their experiences and good practices

As part of the AI4Media project, VRT, the national public-service broadcaster for the Flemish Community of Belgium, has been integrating new AI applications into its workflows. This is a highly complex process with much to learn from good practice – here VRT shares a few of its insights.

To facilitate better integration processes, VRT has developed a stand-alone tool in which the possibilities and functionalities of a new AI application are made visible and tangible to the team involved. This enables key stakeholders, such as editors-in-chief, to better assess the added value and make an informed decision on whether to go ahead with the integration.

The tool is called the Smart News Assistant, a name chosen to express its role in ‘assisting’ or ‘co-creating’ – emphasised because it is important that news professionals remain in control of the production process.

Starting from the content source

When developing this tool, VRT’s news department stressed how important it is to always start from existing content when assessing the capabilities of a potential AI solution, as this ensures a sense of reliability and control.

Therefore, the tool allows the user to start from an existing piece of content – video, audio or text – and see how AI would enable the generation of new content formats, such as a short video or an Instagram post.

After the possibilities of AI are made tangible in the Smart News Assistant, the editors-in-chief can assess the added value. If the editors see potential in the solution via this initial test, VRT’s innovation team proceeds to explore integration possibilities. This involves collaboration with the technology teams responsible for the surrounding systems that the AI application is to be integrated with.

Towards integrating automatic summarisation

One of the potential applications made tangible with the Smart News Assistant was automated summarisation, where a news article is automatically turned into bullet points. This was seen as highly valuable for editors and is something that VRT is now working towards integrating. However, there are many challenges in this process, such as the integration into the existing CMS.
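To make the idea concrete, here is a minimal sketch of how a news article could be turned into bullet points with a generic pretrained summarisation model. The model choice (facebook/bart-large-cnn) and the sentence-splitting step are illustrative assumptions, not VRT’s actual pipeline.

```python
# A minimal sketch, assuming a generic pretrained summarisation model
# (facebook/bart-large-cnn) rather than VRT's actual pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def article_to_bullets(article: str, max_bullets: int = 4) -> list[str]:
    summary = summarizer(article, max_length=120, min_length=30,
                         do_sample=False)[0]["summary_text"]
    # Split the abstractive summary into sentence-level bullet points.
    sentences = [s.strip() for s in summary.split(". ") if s.strip()]
    return [s.rstrip(".") for s in sentences[:max_bullets]]

# "article.txt" is a placeholder for any news article text.
for bullet in article_to_bullets(open("article.txt", encoding="utf-8").read()):
    print("•", bullet)
```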

In this work, the team involved is focused not only on integrating the AI functionality into the familiar news production flow but also on emerging formats. VRT, for example, recently introduced WhatsApp updates, where VRT NWS has its own channel – a format where the summarisation tool might also be useful. By thinking beyond the existing news flow, the team can make the integration work more efficient and pre-empt other potential use cases.

This is where the Smart News Assistant provides additional value: while the integration process is ongoing, the editors can still use the AI tool as it is presented in the Smart News Assistant interface. So although it is not directly integrated into workflows, editors can manually copy-paste the textual suggestions from the interface into their CMS, which places less stress on the integration team and enables immediate value from the AI solution for the editors – even if it requires a few extra clicks.


Screenshot Smart News Assistant (summary) – by Chaja Libot (VRT)


Screenshot Smart News Assistant (Whatsapp Update + fine-tuning result) – Chaja Libot

Author: Chaja Libot (Design Researcher, VRT) 

Recommenders: Amplifiers or Mitigators of Human Biases?

Recommender systems are often criticized for potentially contributing to filter bubbles. This phenomenon is sometimes attributed to algorithmic bias, suggesting that systems operate contrary to user interests. That perspective may be overly simplistic: recommender systems are typically optimized for “utility” as a metric, driven by immediate user engagement such as clicks and likes. In doing so, they inherently reinforce human biases, particularly two of them. The first is confirmation bias, the tendency to search for, interpret, favor, and recall information in a way that confirms one’s preexisting beliefs. This bias once supported rapid decision-making, which was crucial for survival: in the face of imminent threats, it simplifies complex information processing, enabling quicker responses by focusing on data that supports known strategies or dangers. The second is in-group bias, the predisposition to engage with content or groups that share similar attributes or opinions. It enhanced social cohesion and cooperation within tribes, fostering trust and mutual support, all crucial for survival in environments where human groups competed for resources. These biases, while advantageous throughout most of human evolution, pose serious challenges in today’s digital environment, which offers unprecedented freedom to filter information and engage only with agreeable content and like-minded individuals. As a result, recommenders can reinforce such human biases and reaffirm users’ beliefs by “filtering us” into information bubbles. The same technology, however, can also be used to reduce them: recommender systems can be designed to provide us with a broader range of viewpoints and content, pushing us to also consider opinions and information outside of our bubbles, thereby promoting the diverse public discourse that is essential for democratic engagement.

To tackle this issue, we can utilize metrics such as novelty, diversity, unexpectedness, and serendipity in recommendation algorithms, which aim to broaden users’ informational horizons. Moreover, this approach can be supported by technologies that automatically analyze and annotate content, providing the data needed to drive recommendations that are both subtle and transparent. The goal is to encourage user engagement with a variety of topics and viewpoints without overwhelming them.
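As an illustration of how such metrics can enter a recommendation algorithm, here is a minimal sketch of diversity-aware re-ranking using maximal marginal relevance (MMR). The item vectors and relevance scores are assumed to come from an upstream model; this is one common option, not a method prescribed here.

```python
# Minimal sketch of diversity-aware re-ranking (maximal marginal relevance).
# Item vectors and relevance scores are assumed to come from an upstream model.
import numpy as np

def mmr_rerank(relevance, item_vecs, k, lam=0.7):
    """Select k items trading off relevance against similarity to items
    already picked; lam=1.0 reproduces pure relevance ranking."""
    normed = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
    sim = normed @ normed.T                     # cosine similarity matrix
    selected, candidates = [], list(range(len(relevance)))
    while candidates and len(selected) < k:
        def score(i):
            max_sim = max((sim[i][j] for j in selected), default=0.0)
            return lam * relevance[i] - (1 - lam) * max_sim
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Example: 5 items; lowering lam pushes more diverse picks into the top 3.
rel = np.array([0.9, 0.88, 0.86, 0.5, 0.4])
vecs = np.array([[1, 0], [0.99, 0.1], [0.98, 0.15], [0, 1], [0.1, 0.95]], dtype=float)
print(mmr_rerank(rel, vecs, k=3, lam=0.5))
```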

Can services and business models that prioritize long-term user satisfaction over short-term metrics like clicks be successful? Similar shifts have succeeded in other sectors: for example, despite our preference for sugary foods, the market for healthier options has flourished. Moreover, as our understanding of the underlying problems deepens, regulatory measures become more likely. A precondition, however, is that we start recognizing our personal biases and limitations and how they contribute to the creation of filter bubbles and related problems, creating the willingness to tackle them. This includes the exploration of new business models for a healthier information diet, because the current models do not yet address this. They have, for better or worse reasons, catered to our immediate urges too much, at the expense of long-term well-being and societal discourse.

Author: Patrick Aichroth (Fraunhofer IDMT)

Eight new AI services to support future media verification work

As part of their use case in AI4Media, Germany’s international broadcaster Deutsche Welle (DW) and the Athens Technology Centre (ATC) have developed a demonstrator that allows for the testing of new AI services in a media business environment. The services are integrated into a “lab version” of Truly Media, an established platform for collaborative content verification. Here’s an overview of what the services can do:

Video verification services: 

  • Service #1 analyses the deepfake probability of faces extracted from a video and gives users an overall deepfake probability for the entire clip. Transparent AI information helps users interpret these predictive results (Developer: CERTH-ITI). A sketch of this aggregation pattern follows after the list.
  • Service #2 is for video summarisation, offering a video synopsis containing all elements deemed relevant, thus helping users save time when dealing with multiple longer videos (Developer: CERTH-ITI).
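For illustration, the general pattern behind a clip-level deepfake score as described for Service #1 – per-face scores aggregated over frames – might look like the sketch below. The face detector, the stubbed classifier, and the aggregation rule are assumptions for this example, not the actual CERTH-ITI service.

```python
# Minimal sketch: detect faces per frame, score each face with a classifier,
# aggregate into a clip-level score. Detector, classifier stub, and the
# aggregation rule are illustrative assumptions, not the actual service.
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def fake_probability(face_img) -> float:
    """Placeholder for a trained per-face deepfake classifier."""
    return 0.5  # stub: a real system returns a learned probability

def clip_deepfake_score(video_path: str, frame_step: int = 10) -> float:
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % frame_step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
                scores.append(fake_probability(frame[y:y+h, x:x+w]))
        idx += 1
    cap.release()
    # One plausible aggregation: average the top-scoring faces, so a few
    # strongly manipulated faces dominate an otherwise clean clip.
    top = sorted(scores, reverse=True)[:max(1, len(scores) // 10)]
    return float(np.mean(top)) if scores else 0.0
```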

Audio verification services: 

  • Service #3 is for synthetic audio detection, alerting users that elements in an audio file might have been generated with Descript voice generation software, also pointing out the level of detection evidence (Developer: Fraunhofer-IDMT).
  • Service #4 is another service for synthetic audio detection, aiming to cover a wider range of audio generation software and telling users whether or not synthetic content has been detected (Developer: CERTH-ITI).
  • Service #5 is about duplicate detection in audio. It highlights possible duplicates in the audio track, e.g. sections for which a phrase has been copied and pasted (Developer: Fraunhofer-IDMT).
  • Service #6 allows users to compare two or more audio files – and to identify matching segments (Developer: Fraunhofer-IDMT). A sketch of this idea follows after the list.
  • Service #7 classifies the microphones used in a recording. It alerts users to possible changes in the recording conditions, e.g. a change in the recording device. (Developer: Fraunhofer-IDMT)
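To illustrate the idea behind services #5 and #6, here is a minimal sketch that aligns a short clip against a longer recording by cross-correlating loudness envelopes. This is a simplified stand-in, not Fraunhofer IDMT’s algorithm; the file names are placeholders.

```python
# Minimal sketch of finding where a clip matches inside a longer recording
# via normalized cross-correlation of short-time energy envelopes.
# A simplified illustration only, not Fraunhofer IDMT's algorithm.
import numpy as np
import soundfile as sf

def envelope(path, frame=1024, hop=512):
    x, sr = sf.read(path)
    if x.ndim > 1:                      # mix stereo down to mono
        x = x.mean(axis=1)
    idx = range(0, len(x) - frame, hop)
    env = np.array([np.sqrt(np.mean(x[i:i+frame] ** 2)) for i in idx])
    return env, sr / hop                # envelope and its effective rate

def best_match_offset(long_path, clip_path):
    """Offset (seconds) of the clip's best alignment inside the long file.
    Assumes the first file is the longer recording."""
    a, rate = envelope(long_path)
    b, _ = envelope(clip_path)
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    corr = np.correlate(a, b, mode="valid")
    return int(np.argmax(corr)) / rate, float(corr.max() / len(b))

offset, score = best_match_offset("broadcast.wav", "clip.wav")
print(f"best alignment at {offset:.2f}s (match score {score:.2f})")
```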

Text verification service: 

  • Service #8 informs users if the text in a tweet has been generated by AI (more specifically: GPT2) – or written by a human (Developer: CEA).
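For context, one well-known heuristic behind such detectors is that a language model finds its own output statistically “unsurprising”. The sketch below scores a tweet by GPT-2 perplexity; the threshold is an illustrative assumption, and the actual CEA service is a trained classifier rather than this heuristic.

```python
# Minimal sketch of a common heuristic for spotting GPT-2-generated text:
# text that GPT-2 itself finds "unsurprising" (low perplexity) is suspicious.
# Illustration only; the actual service is a trained classifier.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss      # mean token cross-entropy
    return float(torch.exp(loss))

tweet = "Breaking: scientists announce a major discovery in renewable energy."
ppl = perplexity(tweet)
# The threshold is an illustrative assumption; real detectors are trained.
print(f"perplexity={ppl:.1f} ->",
      "possibly machine-generated" if ppl < 40 else "likely human")
```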

The use case owners DW and ATC are testing these new services in the context of the existing Truly Media platform and verification workflows, but also against business requirements related to AI services in media tools, Trustworthy AI, and AI compliance. We hope these learnings will have a positive impact on news media verification work – which is getting more complex and demanding by the hour.

Author: Alexander Plaum, Innovation Manager, Deutsche Welle (DW)

How do social media platforms address AI-generated or manipulated content?

The advancement of artificial intelligence (AI) technologies poses a persistent challenge for the fight against disinformation. These technologies make it easier to manipulate content and facilitate its rapid dissemination on the internet, sometimes resulting in intended or unintended harm and deception. Social media platforms often serve as the primary intermediaries between such content and users. It is therefore crucial to understand their approaches to moderating AI-manipulated and AI-generated content that can potentially circulate as misinformation or disinformation. EU DisinfoLab has developed a factsheet and an analytical framework to analyse and compare the policies of five big platforms in this regard.

The EU DisinfoLab factsheet explores how Facebook, Instagram, TikTok, X, and YouTube address AI-manipulated or AI-generated content within their terms of use, with a specific focus on the potential risks of it turning into misinformation or disinformation.

The analysis concluded that definitions are divergent. Different terms are used to refer to AI-generated or AI-manipulated content, including “deepfakes”, “synthetic media”, or “digitally altered content”. The factsheet reports that there is only limited mention of “artificial intelligence” within the platforms’ policies designed to combat misinformation.

In addition, platforms often neglect to mention AI-generated text and primarily focus on AI-generated images and videos within their policies. While they do address manipulated content, there is typically a lack of attention given to generated content. Notably, platforms are increasingly responding to this challenge by incorporating specific provisions for moderating content generated or manipulated using AI technologies. However, at times, these provisions are limited in scope and confined to content deemed more sensitive, such as political ads.

In cases such as TikTok, where platforms explicitly address synthetic or AI-manipulated media, they try to distinguish between allowed and banned uses. The driving force is either the misleading and harmful potential or a more compliance-oriented approach in terms of copyright and quality standards. It’s important to note that subjective criteria can sometimes be exploited by malicious actors. In many cases, platforms attempt to tackle this issue by requiring users to label AI-manipulated or generated content, placing the responsibility on the user.

From a regulatory point of view, all the studied platforms qualify as Very Large Online Platforms (VLOPs) under the DSA, and all except X abide by the strengthened Code of Practice on Disinformation. Consequently, they are all bound to fulfil the due diligence obligations of the DSA, including justifying the means they deploy to combat disinformation on their services. The Code signatories must also establish or confirm their policies on AI-generated or manipulated content.

The factsheet concludes with a set of recommendations, including a call on platforms to continue responding with effective policy changes to meet new needs in the face of rapidly evolving technologies, to enhance cooperation with experts, and to clarify the burden of responsibility on this complex topic.

The factsheet and full analysis can be consulted here: https://www.disinfo.eu/publications/platforms-policies-on-ai-manipulated-and-generated-misinformation/

Author: Raquel Miguel, EU DisinfoLab
Reviewer: Noémie Krack, KU Leuven Centre for IT & IP Law – imec
Layout and design: Heini Järvinen, EU DisinfoLab

Algorithmic systems: how should DSA risk assessments be conducted?

The Digital Services Act (DSA) became applicable to Very Large Online Platforms and Very Large Online Search Engines (VLOPs and VLOSEs, respectively) in August 2023. These actors must comply with their set of DSA obligations, including conducting systemic risk assessment and mitigation. While these new obligations hold great promise, the DSA articles do not specify how the identification and assessment of these systemic risks should occur. Civil society has started investigating the topic and delivered recommendations and methodologies. In this piece, I present a selection of their work on the subject.

The DSA is a landmark EU regulation which is currently changing the digital landscape. It establishes a new set of accountability obligations for platforms to create a safer digital space. The DSA promisingly introduced a self-assessment and mitigation obligation for VLOPs and VLOSEs about the systemic risks stemming from their services. These actors must “diligently identify, analyse and assess any systemic risks in the Union stemming from the design or functioning of their service and its related systems, including algorithmic systems, or from the use made of their services.”

Systemic risks include, non-exhaustively: the dissemination of illegal content; any negative effects on the exercise of fundamental rights; and the intentional manipulation of their services, including through inauthentic use or automated exploitation, that has a negative effect on the protection of public health, minors, civic discourse, electoral processes, or public security.

The DSA emphasises the need for VLOPs and VLOSEs to consider the influence of recommender systems, content moderation systems, and other algorithmic systems on these risks. However, the legislation lacks specific details on the risk assessment procedure, prompting civil society to step up and produce insights for a meaningful implementation of this obligation. A selection is presented below. 

  • Risks to media freedom and diversity

AlgorithmWatch delivered an outline of a risk assessment method for measuring the risks posed by internet services to media freedom and diversity. They focused on how to identify and assess the risks that internet services generate for freedom of speech and media pluralism. They established a framework composed of four steps, which they then applied to the digital media sector, and provided various case studies.

  • Risks to fundamental rights 

ECNL (European Center for Not-for-Profit Law) and Access Now released key recommendations for conducting fundamental rights impact assessments under the DSA. Their paper aims to help primarily the European Commission in its enforcement activities, but also VLOPs and VLOSEs in their self-assessments.

  • Risks related to disinformation spread

An independent study analysed systemic risks caused by pro-Kremlin disinformation campaigns. The study establishes a methodological approach for civil society and the broader expert community to contribute to assessing the different types of risks caused by disinformation on online platforms. The report can be taken into account by the EC when analysing the risk assessments submitted by VLOPs and VLOSEs. 

Final reflections

These initiatives show that while risk assessment and mitigation obligations under the DSA are powerful tools against online harms, a closer examination of the methodology is indeed essential. The EC has announced that further studies will be conducted.

Close interdisciplinary collaboration between relevant stakeholders will be key to ensuring a well-thought-out procedure and methodology, especially because conducting and evaluating a risk assessment entails, as underlined by AlgorithmWatch, normative and technological challenges. Designing risk assessments requires great care and continuous effort, because every risk model unavoidably “requires simplification and abstraction of reality in some respects”.

Author: Noémie Krack, Legal Researcher, CiTiP, KU Leuven-imec.

First-ever European Media Industry Outlook published by the European Commission

The Commission published the first-ever European Media Industry Outlook, analysing trends in the audiovisual, video game and news media industries. Commissioner for Internal Market Thierry Breton presented the report at the European Film Forum, organised at the Festival de Cannes.

The Media Outlook report provides market data and identifies challenges and underlying technological trends common to the media industries. Among other findings, it stresses the structural impact of the ongoing shift in media consumption in favour of digital players. According to the report, growth is mostly driven by segments such as video on demand (VoD), mobile gaming or immersive content.

Image copyright: European Commission

The report also highlights the relevance of strategic assets such as intellectual property (IP) rights for media companies, and how the retention, acquisition and exploitation of these rights can help them increase revenues, invest, or remain independent. It also stresses that an early yet wise uptake of innovative technologies and techniques (e.g. AI virtual production) is fundamental to adapting, opening up new markets, and becoming more competitive. Moreover, audience-driven strategies should serve as a basis for building successful business models.

Access to the European Media Industry Outlook report HERE.

Launch of the AI Media Observatory

In May 2023 the AI4Media consortium launched the beta version of the European AI Media Observatory. The Observatory serves as a knowledge platform that monitors and curates relevant research on AI in media, provides expert perspectives on the potential and challenges that AI poses for the media sector, and allows stakeholders to easily get in touch with relevant experts in the field via its directory. The Observatory builds on the expertise of more than 30 leading research and industry partners in the field of AI in media.

The newly launched Observatory is envisioned as a one-stop shop for industry, civil society and policy makers interested in the implications of AI in the media sector. Its aim is to support the ongoing efforts of the multidisciplinary community of professionals working towards responsible use of AI in the media sector, and to contribute to the broader discussion and understanding of the development and use of AI in the sector and its impacts on society, the economy and people.

The Observatory features three main components – ‘Your AI Media Feed’, ‘Let’s Talk AI and Media’ and ‘Find an AI Media Expert’ – along with an overview of relevant upcoming events.

  •   ‘Your AI Media Feed’ is a content site featuring the latest material on AI in the media, focusing on emerging trends in the sector, changes in the policy landscape, the societal implications of AI, and approaches to social and ethical AI.
  •   ‘Let’s Talk AI and Media’ is a video site that features relevant talks, roundtable discussions, and presentations by experts in the field. The site provides an easily accessible entry point to gain insights into ongoing topics and debates in the field.
  •   ‘Find an AI Media Expert’ is an expert directory where you can easily search, find, and contact a relevant technical, legal, or social expert within the field of media and AI. If you work in this field, you are also welcome to sign up to be featured in the directory.

The Observatory features both content produced as part of the AI4Media project and relevant external content. If you know of relevant content, you are welcome to submit it via the form on the site. All featured content is curated by the editorial board of the Observatory according to the outlined editorial principles. Experts who sign up to be featured in the directory equally undergo a relevance check by the editorial board.

The current version of the Observatory is a beta version, so not all parts are fully developed; the expert directory, for example, will be deployed in full in the coming months. The Observatory will also undergo adjustments and improvements based on feedback in the period up to the official launch of the final version in October 2023.

Will the Digital Services Act (DSA) revolutionise the internet? The present and the future of algorithmic content moderation.

First, the Deliverable D6.2 “Report for Policy on Content Moderation” introduces the concept of “algorithmic content moderation” and explains how matching and classification (or prediction) systems are used to make decisions about content removal, geoblocking, or account takedown. It then provides an overview of the challenges and limitations of automation in content moderation: the lack of context differentiation, the lack of representative, well-annotated datasets for machine learning training, and the difficulty of computationally encoding sociopolitical concepts such as “hate speech” or “disinformation”.
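A minimal sketch of the two families the report distinguishes – matching against known content and classifying new content – could look as follows; the hash list, the stubbed classifier, and the thresholds are illustrative assumptions.

```python
# Minimal sketch of algorithmic content moderation: (1) matching known banned
# content by hash, (2) classifying new content with a trained model.
# Hash list, classifier stub, and thresholds are illustrative assumptions.
import hashlib

BANNED_HASHES = {"5d41402abc4b2a76b9719d911017c592"}  # hashes of known illegal items

def matches_known_content(data: bytes) -> bool:
    return hashlib.md5(data).hexdigest() in BANNED_HASHES

def classify(text: str) -> float:
    """Placeholder for a trained hate-speech/disinformation classifier."""
    return 0.0  # stub: a real system returns a probability

def moderate(data: bytes, text: str) -> str:
    if matches_known_content(data):
        return "remove"              # exact match against known content
    score = classify(text)
    if score > 0.9:
        return "remove"              # high-confidence classification
    if score > 0.6:
        return "human review"        # uncertain cases go to human moderators
    return "keep"
```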

The tension between content moderation and the fundamental human right to freedom of expression is another research theme. The right to freedom of expression in Europe is enshrined in Article 10 of the European Convention on Human Rights (ECHR) and Article 11 of the EU Charter of Fundamental Rights (ECFR) and includes the right to freely express opinions, views, and ideas and to seek, receive and impart information regardless of frontiers. The use of algorithmic content moderation tools may undermine freedom of information, since such systems might not distinguish adequately between lawful and unlawful content, leading to the over-blocking of lawful communications. On the other hand, the under-removal of certain types of content results in a failure to address hate speech and may create a “chilling effect” on some individuals’ and groups’ willingness to participate in online debate.

Second, the report analyses the EU legal landscape concerning content moderation along two dimensions. First, the horizontal rules, which apply to all types of content: the e-Commerce Directive, the newly adopted Digital Services Act (DSA) and the Audio-Visual Media Services Directive (AVMSD) that imposes obligations on video-sharing platforms. Next, it focuses on rules which apply to specific types of content: terrorist content, child sexual abuse material (CSAM), copyright infringing content, racist and xenophobic content, disinformation, and hate speech. For each of the initiatives, the report provides a description of the main concepts, a critical assessment and future-oriented recommendations.

The Digital Services Act (DSA), which entered into force on 16 November 2022, is subject to detailed analysis given its recency and novelty. The main aims of the new rules are to:

  • Establish a horizontal framework for regulatory oversight, accountability and transparency of the online space
    • One of the measures foreseen by the DSA includes the obligation for online platforms to publish yearly transparency reports, detailing their algorithmic content moderation decisions.
  • Improve the mechanisms for the removal of illegal content and for the effective protection of users’ fundamental rights online.
    • The DSA establishes a notice-and-action framework for content moderation. This mechanism allows users to report the presence of (allegedly) illegal content to the service provider concerned and requires the provider to take action in a timely, diligent, non-arbitrary, and objective manner.
  • Propose rules to ensure greater accountability on how platforms moderate content, on advertising and on algorithmic processes.
    • In particular, according to Article 14 DSA, online platforms remain free to decide what kind of content they do not wish to host, even if this content is not actually illegal. They have to, however, make this clear to their users. Moreover, any content moderation decisions must be enforced ‘in a diligent, objective and proportionate manner’, and with due regard to the interests and fundamental rights involved.
    • Importantly, Article 17 requires that providers of hosting services provide a clear and specific statement of reasons to any affected recipients of the service on content moderation decisions.
  • Provide users with possibilities to challenge the platforms’ content moderation decisions.
    • The DSA offers new redress routes which can be used by affected users in a sequence or separately: an internal complaint-handling system and the out-of-court dispute settlement.
  • Impose new obligations on very large online platforms (VLOPs) and very large online search engines (VLOSEs) to assess and mitigate the systemic risks posed by their systems.
    • VLOPs and VLOSEs have the obligation to self-assess the systemic risks that their services may cause and adopt mitigation measures such as adapting their content moderation and recommender systems policies and processes.

It remains to be seen if the DSA will be a “success story”. Besides the elements listed above, the DSA also provides a role for a community of specialised trusted flaggers to notify problematic content, a new access to platforms’ data mechanism in Article 40, as well as a system of enforcement and penalties for non-compliance.

Third, the report also offers a perspective on future trends and alternative approaches to content moderation. These include end-user or community-led moderation, such as the voluntary moderation found on platforms like Wikipedia and Discord. Next, the deliverable outlines content moderation practices in the fediverse, using the Mastodon project as a case study. Although these forms of moderation have many advantages, because there is no centralised fediverse authority there is no way to fully exclude even the most harmful content from the network. Moreover, fediverse administrators will generally have fewer resources, as content moderation there is typically run by volunteers. Much will therefore depend on whether and how the decentralised content moderation framework scales. The report also analyses content moderation in the metaverse, which can be described as an immersive 3D world. One of the key research questions concerns the applicability of the newly adopted DSA to illegal or harmful metaverse content. The need to further amend EU law cannot be ruled out, since the topic of virtual reality is not specifically addressed in the DSA. There are, however, interpretations which suggest that virtual 3D worlds fall within the scope of the DSA.

Fourth, the report outlines the advantages and challenges of self-regulatory accountability mechanisms such as the Facebook Oversight Board (FOB) and the civil society-proposed Social Media Councils. The FOB, as well as the Twitter Trust and Safety Council, the TikTok Content Advisory Council, the Spotify Safety Advisory Council, and Twitch’s Safety Advisory Council have both supporters and critics. Overall, they may provide a valuable complement to robust, international legislation and an additional venue for users’ complaints against platforms.

Fifth, the report also offers the main takeaways and the results of the workshop on AI and Content Moderation organised by two AI4Media consortium partners – KUL and UvA – inviting academics, media companies, a representative of a very large online platform, and a consultant from an intergovernmental organisation as participants.

Last, the deliverable offers both high-level recommendations and content-specific recommendations regarding the moderation of terrorist content, copyright-protected content, child sexual abuse material, hate speech, and disinformation. It concludes that there is no easy way to address the multi-layered complexity of content moderation. Effective enforcement of the new rules will be key to ensuring the balance between the effective removal of unwanted and illegal content and the fundamental rights of online users to express themselves freely.

Author: Lidia Dutkiewicz, Center for IT & IP Law (CiTiP), KU Leuven

Addressing challenges for the use of AI in media. What ways forward?

How can media companies, researchers, and legal and social science scholars tackle the key challenges for the use of AI applications in the media sector? The deliverable D2.4 “Policy Recommendations for the use of AI in Media Sector” is the result of interdisciplinary research by legal, technical, and societal AI4Media experts, as well as an analysis of the 150 responses from AI researchers and media professionals collected as part of the AI4Media survey. It provides initial policy recommendations to EU policymakers, addressing these challenges.

There is an enormous potential for the use of AI at the different stages of media content production, distribution and re-use. AI is already used in various applications: from content gathering and fact-checking, through content distribution and content moderation practices, to audio-visual archives. However, the use of AI in media also brings considerable challenges for media companies and researchers and it poses societal, ethical and legal risks.

Media companies often struggle with staff and knowledge gaps, limited resources (e.g. a limited budget for innovation activities) and the power imbalance vis-à-vis large technology and platform companies who act as providers of AI services, tools and infrastructure. Another set of challenges relates to legal and regulatory compliance. This includes the lack of clear and accessible ethics and legal advice for media staff, as well as the lack of guidance and standards to assess and audit the trustworthiness and ethicality of the AI used in media applications.

To overcome some of these challenges, the report provides the following initial recommendations to the EU policy-makers:

  • Promoting EU-level programs for training media professionals
  • Promoting and funding the development of national or European clusters of media companies and AI research labs that will focus on specific topics of wider societal impact
  • Promoting initiatives such as the Media Data Space, which would extend to pooling together AI solutions and applications in the media sector
  • Fostering the development of regulatory sandboxes to support early-stage AI innovation
  • Providing practical guidance on how to implement ethical principles, such as the AI HLEG Guidelines for Trustworthy AI, in specific media-related use cases


Researchers in AI and media often face challenges predominantly related to data: the lack of real-world, quality, and GDPR-compliant datasets for developing AI research. Disinformation analysis within media companies suffers not only from restricted online platform application programming interfaces (APIs) but also from the lack of common guidelines and standards as to which AI tools to use, how to interpret results, and how to minimise confirmation bias in the content verification process.

To overcome some of these challenges, the report provides the following initial recommendations to the EU policy-makers:

  • Supporting the development of publicly available datasets for AI research, cleared and GDPR-compliant (a go-to place for sharing quality AI datasets)
  • Providing formal guidelines on AI and the GDPR which will address practical questions faced by the media sector such as on using and publishing datasets containing social media data
  • Promoting the development of standards for the formation of bilateral agreements for data sharing between media/social media companies and AI researchers

There are also considerable legal and societal challenges for the use of AI applications in the media sector. Firstly, there is a complex legal landscape and a plethora of initiatives that indirectly apply to media, yet a lack of certainty on whether and how various legislative and regulatory proposals, such as the AI Act, apply to the media sector. Moreover, societal and fundamental rights challenges relate to the possibility of AI-driven manipulation and propaganda, AI bias and discrimination against underrepresented or vulnerable groups, and the negative effects of recommender systems and content moderation practices.

To overcome some of these challenges, the report provides the following initial recommendations to the EU policy-makers:

  • Facilitating a process of establishing standardised processes to audit AI systems for bias/discrimination
  • Providing a clear vision on the relationship between legacy (traditional) media and the very large online platforms in light of their “opinion power” over public discourse
  • Clarifying the applicability of the AI Act proposal to media AI applications
  • Ensuring the coherence of AI guidance between different standard setting organisations (CoE, UN, OECD,…)

Lastly, the report also reflects on the potential development of a European Digital Media Code of Conduct as a possible way to tackle the challenges related to the use of AI in media. It first maps the European initiatives which already establish codes of conduct for the use of AI in media. Then, the report proposes alternatives to a European Digital Media Code of Conduct. It notes that instead of a high-level list of principles, media companies need a practical, theme-by-theme guide to the ethical compliance of real-life use cases. Another possible solution could put more focus on certifications to ensure a fair use of AI in media.


Author: Lidia Dutkiewicz, Center for IT & IP Law (CiTiP), KU Leuven

AI4Media researchers discuss their work: a video series

The 7th AI4Media Plenary Meeting was held at the University of Florence, Italy on January 31st and February 1st 2023.

During the event, the AI researchers working in WPs 3, 4, 5 and 6 presented their most recent research results through posters and demos, while the media industry partners gave live demonstrations of the demonstrators developed for the seven AI4Media use cases.

A debate space was also provided for the partners to exchange ideas and get to know each other’s work.

Some of the AI techniques and demos presented at the event are featured in short videos available on the project’s YouTube channel.


Authors: Candela Bravo & Joana Martinheira (LOBA)

First open call projects come to an end with promising results and contributions to the community

The 10 projects funded under the first AI4Media open call have finalised their activities. The projects – five from the application track and five from the research track – initiated their activities on 1 March 2022 and ended on 31 October 2022 and 28 February 2023, respectively. The projects, which addressed different topics in the AI and media domains, delivered new applications and research work focusing on audio and music, recommendation systems, edge computation, misinformation, and others.

The main results and achievements of the 10 projects are presented below, each having also provided a contribution to the AI4Media ecosystem.

VRES (Application project by Varia)

The VRES (Varia Research) project set out to revolutionise journalistic research by providing an integrated SaaS solution that allows media monitoring and research organisation in one place. The machine-learning-powered application Varia Research promises more efficient research and additional automated insights. The project has contributed to the AI4Media ecosystem and to the broader media audience with a freely available online research application that brings AI to the people – to the heavy lifters of the news media industry, the journalists.

AIEDJ (Application project by musicube GmbH)

The AIEDJ (AI Empathic DJ) project has focused on developing neural networks that process audio files and automatically tag them with musical features, sound features and emotions. The project developed software that retrieves user data via the Spotify API and feeds it into a neural network trained on both music metadata and audio files. The project has contributed software that allows search operations based on the musical information retrieved with the neural nets, shifted by the user’s Spotify listening behaviour (i.e. the user’s perspective on music).

InPreVI (Application project by JOT Internet Media)

The InPreVI (Inauthentic web traffic Prediction in Video marketing campaigns for investment optimization) project set out to develop an innovative AI-based system that can, first, identify the main behavioural patterns of inauthentic users to predict their actions and limit their impact on video marketing campaigns and, second, model the quality score associated with a campaign. InPreVI has contributed a dataset that can be used to train and validate predictive and classification models as well as to enrich other data; a classification model that provides ideas related to the potential use of the dataset; and a predictive model for conversion difference.

CUHE (Application project by IN2 Digital Innovations GmbH)

The CUHE (An explainable recommender system for holistic exploration and CUration of media HEritage collections) project set out to develop and demonstrate a web-based application, based on AI recommendations, that allows cultural heritage professionals (e.g. museum curators, archivists) as well as researchers to explore existing media and cultural heritage digital collections in a more holistic way, and lets them curate new galleries or create digital stories and exhibitions that showcase and share the new insights gained. The project has contributed the CUHE recommender system, which will be made available as a service, as well as a related dataset.

CIMA (Application project by AdVerif.ai)

The CIMA (Next-Gen Collaborative Intelligence for Media Authentication) project has focused on creating a next-gen collaborative intelligence platform, powered by the latest AI advancements, to make journalists and fact-checkers more effective in media authentication. The work focused on collaborative investigation and the collection of evidence to support cross-EU investigations and knowledge sharing. Moreover, the CIMA project has also looked to provide a novel system for the preservation of evidence on the Internet. The project has contributed algorithms for integration with common open-source intelligence tools.

RobaCOFI (Research project by the Institut Jozef Stefan)

The RobaCOFI (Robust and adaptable comment filtering) project set out to develop new methods to overcome the challenge of moderating comments associated with news articles, which is often done by human moderators, whose decisions may be subjective and hard to make consistently. The project has developed methods for the semi-automatic annotation of data, including new variants of active learning in which the AI tools can quickly select the data that need to be labelled. The work builds on recent progress in topic-dependent comment filtering to create tools that can take the context of the associated news article into account, reducing the amount of new data needed. The project has contributed several public resources, including a pre-trained offensive language moderation classifier and software tools for model adaptation and active learning.
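To make the active-learning idea concrete, here is a minimal sketch of pool-based uncertainty sampling for comment moderation. The feature extractor, classifier, and batch size are illustrative assumptions, not RobaCOFI’s published method.

```python
# Minimal sketch of pool-based active learning with uncertainty sampling:
# the classifier asks a human to label only the comments it is least sure
# about. Features, model, and batch size are illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def human_label(comment: str) -> int:
    """Stub for the human moderator's judgement (1 = offensive, 0 = ok)."""
    return 0

def active_learning_loop(labeled, labels, pool, rounds=5, batch=10):
    """labeled, labels, pool are Python lists; pool shrinks as items are labelled."""
    vec = TfidfVectorizer()
    model = None
    for _ in range(rounds):
        model = LogisticRegression(max_iter=1000)
        model.fit(vec.fit_transform(labeled), labels)
        # Uncertainty sampling: pick the pool comments with the least
        # confident predictions.
        probs = model.predict_proba(vec.transform(pool))
        uncertainty = 1.0 - probs.max(axis=1)
        ask = np.argsort(-uncertainty)[:batch]
        for i in sorted(ask, reverse=True):   # pop from the end first
            comment = pool.pop(i)
            labeled.append(comment)
            labels.append(human_label(comment))
    return model, vec
```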

NeurAdapt (Research project by Irida Labs)

The NeurAdapt (Development of a Bio-inspired, resource efficient design approach for designing Deep Learning models) project set out to explore a new path in the design of deep Convolutional Neural Networks (CNNs), which could enable a new family of more efficient and adaptive models for any application that relies on the predictive capabilities of deep learning. Inspired by recent advances in the study of biological interneurons, which highlight the importance of inhibition and random connectivity for the encoding efficiency of neuronal circuits, the project investigated mechanisms that could impart similar qualities to artificial CNNs. The NeurAdapt project has contributed an “As A Service” asset that provides access to a dynamic-computation CNN feature extraction network for image classification, and a free-to-use executable that offers hands-on experience of the NeurAdapt technology, using a small and fast feature extraction network trained on the CIFAR-10 dataset.

SMAITE (Research project by the University of Manchester)

The SMAITE (Preventing the Spread of Misinformation with AI-generated Text Explanations) project has focused on developing a novel tool for the automated fact-checking of online textual content that contextualises and justifies its decisions by generating human-accessible explanations. The project’s vision has been to equip citizens with a digital literacy tool that not only judges the veracity of any given claim but, more importantly, also presents explanations that contextualise and describe the reasoning behind the judgement.

TRACES (Research project by the Sofia University “St. Kliment Ohridski”, GATE Institute)

The TRACES (AuTomatic Recognition of humAn-written and deepfake-generated text disinformation in soCial mEdia for a low-reSourced language) project set out to find solutions and develop new methods for disinformation detection in low-resourced languages. The innovativeness of TRACES lies in detecting both human-written and deepfake-generated disinformation, recognising disinformation by its intent, its interdisciplinary mix of solutions, and the creation of a package of methods, datasets, and guidelines for building such methods and resources for other low-resourced languages. The project has contributed machine learning models for detecting untrue information and automatically generated texts in Bulgarian with the models GPT-2 and ChatGPT, social media datasets automatically annotated with markers of lies, and more.

edgeAI4UAV (Research project by the International Hellenic University)

The edgeAI4UAV (Computer Vision and AI Algorithms Edge Computation on UAVs) project has focused on developing a complete framework for detecting and tracking moving people and objects, in order to extract evidence data (e.g. photos and videos of specific events) in real time (when the event occurs), for tasks such as cinematography, through a reactive Unmanned Aerial Vehicle (UAV). To this end, the edgeAI4UAV project implemented an edge computation node for UAVs, equipped with a stereoscopic camera, which provides lightweight stereoscopic depth information to be utilised for evidence detection and UAV locomotion.

Authors: Samuel Almeida & Catarina Reis (F6S)

The AI4Media Junior Fellows’ collection of testimonials has been released


The Junior Fellows Exchange Program is primarily expected to contribute to the creation of a critical mass of early career researchers with a deeper understanding of both media AI research and media industry needs, through collaborative work with research labs and media companies in Europe. All parties involved benefit from novel ideas and the spread of media AI expertise and skills.

The AI4Media Junior Fellows Exchange Program has been a success, with over 60 exchanges of researchers from more than 40 organisations across Europe, and with important outcomes in the form of papers, software, and datasets.

This booklet presents the testimonials of 20 Junior Fellows who participated in the program in 2021-2022. The Fellows discuss the projects they worked on, their views on the opportunities offered by the program, and their advice to researchers who might be thinking about an exchange.

We thank the Fellows for their contributions to AI4Media, and invite junior and senior researchers across Europe to join the program.

Authors: Filareti Tsalakanidou (CERTH), Daniel Gatica-Perez (IDIAP), Yiannis Kompatsiaris (CERTH)

New AI4Media white papers released: industry needs for AI uptake in the media

AI is already here and is pervasive, with many applications in the media sector, from media news research and production to game development, music generation, and media asset management. Europe is home to numerous research labs and universities that are exploring the vast possibilities and bounds of AI, as well as to a vibrant ecosystem of media companies that want to use AI to improve their products, services, and operations. But bridging the gap between AI scientists and researchers and the actual end-users of AI algorithms has always been a challenge. In AI4Media, we seek to narrow this gap by publishing a set of white papers, as part of the project’s effort to align AI research with the industrial needs of media companies, describing the most important challenges and requirements for AI uptake in each use case area within the media industry.

The seven AI4Media white papers deal with the use of AI in several media domains throughout the media and content value chain, spanning from disinformation detection and analysis; news research, production, and publication; media production; data-driven research with media content in social sciences and humanities; to video game testing and music processing, music composition, and media asset organisation and management.

Below we provide an overview of the key messages and insights from each white paper.

AI Support Needed to Counteract Disinformation

  • Most fact checking and verification specialists regard AI technologies as highly valuable and important to support them in the task of counteracting disinformation, despite shortcomings associated with some existing tools.
  • New AI support functions are needed in two main areas of fact checking and verification work:
    1. Detection of synthetic media items or synthetic elements, and identification of content manipulation,
    2. Detection of disinformation narratives in online/ social media, including respective content, actors, or networks.
  • The user group of fact checkers and verification specialists has a high need for trustworthy, understandable AI support functions, especially in terms of explainability, transparency, and robustness.


AI for News: The Smart News Assistant

  • There is a clear opportunity for AI tooling to facilitate mundane and burdensome journalistic tasks, giving more space to creativity and original investigative and informative work.
  • Because of the fragmented information landscape, monitoring assistance is of interest to journalists.
  • The fact that journalists are increasingly confronted with disinformation results in a need for understandable, accessible and easy-to-use AI tools for fact-checking.


AI in Vision: High Quality Video Production and Content Automation

  • Several crucial tasks of the media value chain are not well covered by existing tools; new AI-driven tools are needed to fill this gap.
  • Trustworthy AI features are one of the key factors that affect the wide adoption of AI in the news media sector, especially those related to Privacy Protection and Legal Compliance. The research community should push as much as possible to build trustworthy AI tools that respect user privacy and comply with relevant regulations.


AI Techniques for Social Sciences and Humanities Research

  • While many researchers are well-versed and technically supported in textual analysis, AI tools for the multimodal content analysis of still images, moving images and sounds fall short of meeting end-users’ requirements. This is due to algorithmic limitations and to UI/UX considerations not being fully taken on board.
  • To fully integrate AI tools into their workflows, researchers require flexible, easily configurable, transparent and explainable solutions that could be adopted in a variety of research scenarios.


AI for Video Game Testing and Music Processing

  • AI-powered tools shouldn’t replace Quality Assurance and music analysis/synthesis processes done by humans, but rather enhance existing practices and help humans achieve their tasks.
  • Industry partners don’t mind spending more time to get AI-powered tools working but they must be able to easily integrate them into their production pipeline.
  • It is important to have fine control over the input of the automated AI systems and provide a variety of methods to showcase their output.


AI music composition tools for humans

  • Tools for music co-creation go beyond learning models and should include the architectural requirements that a user needs to execute a full application. This means access to powerful computing infrastructure.
  • A creative process cannot be formalized; a key element is the balance between powerful tools and the freedom to use and combine them. This is the basic requirement for the co-creative process.


AI Technology in Image & Video Organisation

  • AI-enhanced automated organisation of large media collections significantly aids media companies in reducing costs and, at the same time, provides new opportunities for visual content monetisation.
  • Media companies have realised the importance of implementing AI-enhanced image and video (re)organisation technologies but have lagged in implementing such technologies as part of their workflows.

A common theme in almost all use case areas is the demand for trustworthy AI tools that are explainable and easily understandable by their end-users. User experience aspects are naturally also quite important; a smooth user experience and intuitive interfaces are a key requirement for most media professionals. Finally, maintaining control over the AI results and any subsequent decision-making process is an important factor for media professionals.

Author: Danae Tsabouraki (ATC)

AI4Media supporting the CBMI 2022 Conference

The 19th International Conference on Content-based Multimedia Indexing (CBMI2022) took place as a hybrid conference in Graz, Austria, from 14 to 16 September 2022, organised by JOANNEUM RESEARCH with the support of AI4Media and ACM SIGMM. Probably still as an effect of the COVID pandemic, the event was a bit smaller than in previous years, with around 50 participants from 18 countries (13 European countries, the rest from Asia and North America). About 60% attended on-site, the others via web conference.

The conference program included two keynotes. The opening keynote by Miriam Redi from Wikimedia analysed the role of multimedia assets in a free knowledge ecosystem such as the one around Wikipedia. The closing keynote by Efstratios Gavves from the University of Amsterdam showcased recent progress in machine learning of dynamic information and causality in a diverse range of application domains and highlighted open research challenges.

With the aim of increasing the interaction between the scientific community and the users of multimedia indexing technologies, a panel session titled “Multimedia Indexing and Retrieval Challenges in Media Archives” was organised. The panel featured four distinguished experts from the audiovisual archive domain. Brecht Declerq from meemoo, the Flemish Institute for Archives, is currently the president of FIAT/IFTA, the International Association of TV Archives. Richard Wright started as a researcher in speech processing before becoming a renowned expert in digital preservation, setting up a series of successful European projects in the area. Johan Oomen manages the department for Research and Heritage at Beeld en Geluid, the Netherlands Institute for Sound and Vision. Christoph Bauer is an expert from the Multimedia Archive of the Austrian Broadcasting Corporation ORF and consults archives of the Western Balkan countries on digitisation and preservation topics.

The panel tried to analyse why only a small part of research outputs makes it into productive use at archives, and identified research challenges such as the need for more semantic and contextualised content descriptions, the ability to easily trade off the amount vs. the accuracy of generated metadata, and the need for novel paradigms for interacting with multimedia collections beyond the textual search box. At the same time, archives face the challenge of dealing with much richer metadata, but without the quality guarantees known from manually documented content.

The program included a special session on learning from scarce data in the multimedia domain, organised by AI4Media partners. The session included three papers, covering topics of learning person detection from content generated with computer games, applying Hebbian learning approaches for scarce data problems in multimedia and applying semi-supervised learning approaches to few-shot object detection.

AI-based tools to address societal problems

AI4Media’s work on “Human- and Society-centered AI Algorithms” in the first 16 months of the project comprises the following activities:

  • policy recommendations for content moderation, which investigate aspects of future regulation: who should decide which content should be removed, for which reasons, when and how;
  • development of detectors for content manipulation and synthesis, which address the growing problem of disinformation based on visual, audio and textual content;
  • development of trusted recommenders, which address challenges related to privacy and bias for recommendation services;
  • development of tools for healthier political debate, aiming at sentiment analysis, public opinion monitoring, and measuring the overall “healthiness” of online discussions;
  • development of tools to understand the perception of hyper-local news, focusing on health information for this period;
  • measuring user perception of social media, focusing on tools and methods that can accurately predict or identify viewer’s emotions and perception of content such as interestingness or memorability;
  • measuring real-life effects of private content sharing, which can often lead to unexpected and serious consequences.

All this is presented in the document “First-generation of Human- and Society-centered AI algorithms (D6.1)”, which also includes references to publications and published software.

Policy recommendations for content moderation: This section addresses a key legal topic around media: Who should decide which content should be removed, for which reasons it should be removed, and when and how it should be removed? In this context, several questions are addressed:

  • Which overall approach should be taken? Self-regulation (such as codes of practice and codes of conduct), or hard-law EU regulatory instruments?
  • How can regulation approaches be designed to respect fundamental rights such as freedom of expression without limiting the open public debate?
  • How can it be ensured that legitimate, lawful content is not deleted and that the freedom of expression is not violated?
  • How do users know what gets deleted, and whether what gets deleted violates laws or not?

Beyond that, the section addresses the use of automated tools in content moderation, offers a critical assessment of the technical limitations of algorithmic content moderation, and points out risks for fundamental human rights, such as freedom of expression. Finally, it introduces the main elements of the EU regulatory framework applicable to content moderation.

Manipulation and synthetic content detection in multimedia: This section addresses various approaches related to audio, video and textual content verification, i.e. the detection and localization of manipulations and fabrications, with a focus on the latter. Especially due to the latest advancements in the field of Generative Adversarial Networks (GANs) and Language Models (LMs), the distinction between real and fake content (deepfakes) is becoming increasingly difficult to make. Apart from many beneficial applications, there are also many applications that are potentially harmful to individuals, communities, and society as a whole, especially with respect to the creation and distribution of propaganda, phishing attacks, fraud, etc., and there is a growing demand for technologies to support content verification and fact-checking. AI4Media aims at the development of such technologies, which are also used within several of the AI4Media use cases. This document reports on the activities and results of the first project phase:

  • for visual synthesis and manipulation detection, three methods for detecting synthetic/manipulated images and videos (based on facial features with CNN/LSTM architectures, on optical flow, and on CNNs), one method for image generation (layout-to-image translation based on a novel Double Pooling GAN with a Double Pooling Module), and an evaluation of existing state-of-the-art CNN-based approaches are presented (a minimal illustrative sketch of such a CNN-based detector follows this list);
  • for audio synthesis and manipulation detection, two detection methods (based on microphone classification and on DNNs) as well as synthetic speech generation tools for training and testing are presented;
  • for text synthesis and manipulation detection, an approach for composing a dataset of deepfake tweets and a method to distinguish synthetic from original tweets are presented.
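To make the general idea behind such detectors concrete, here is a minimal sketch of a CNN-based real-vs-fake image classifier in PyTorch. It is an illustration only, not one of the AI4Media detectors: the ResNet-18 backbone, the binary objective and all training details are assumptions chosen for brevity.

```python
# Minimal sketch of a CNN-based synthetic-image detector (illustrative
# only -- not the AI4Media detectors; backbone and training details are
# assumptions chosen for brevity).
import torch
import torch.nn as nn
from torchvision import models

class FakeImageDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # Reuse a pretrained backbone and replace its classification
        # head with a single logit: > 0 means "synthetic/manipulated".
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):        # x: (batch, 3, 224, 224) image tensor
        return self.backbone(x)  # raw logits

model = FakeImageDetector()
criterion = nn.BCEWithLogitsLoss()  # binary real/fake objective
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch:
images = torch.randn(8, 3, 224, 224)          # stand-in for face crops
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

In practice, detection methods like those listed above differ mainly in what they feed such a network (facial features, optical-flow fields, raw frames) and in how they aggregate per-frame scores over a video.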

Hybrid, privacy-enhanced recommendation: This section outlines the initial activities related to recommendation (most of this work will take place in the second half of the project). Recommender systems are powerful tools that can help users find “the needle in the haystack” and provide orientation, but they also strongly influence how users perceive the world and can contribute to a problem often referred to as “filter bubbles” – AI4Media aims at proposing how such effects can be minimized. Beyond that, the task also aims at developing tools to address privacy, a potential issue for all recommenders that exploit user or usage data, by applying so-called Privacy Enhancing Technologies (PETs).
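To illustrate what a Privacy Enhancing Technology can look like in this context, the toy sketch below perturbs a user-item rating matrix with calibrated Laplace noise before any model sees it, in the spirit of local differential privacy. The epsilon value, rating range and workflow are illustrative assumptions, not AI4Media’s actual PET design.

```python
# Toy PET illustration: perturb raw user-item ratings with Laplace
# noise before training a recommender. All parameters are illustrative
# assumptions, not AI4Media's actual design.
import numpy as np

def privatize_ratings(ratings, epsilon=1.0, rating_range=(1.0, 5.0)):
    """Add Laplace noise calibrated to the rating range.

    A smaller epsilon means stronger privacy but noisier data.
    """
    lo, hi = rating_range
    sensitivity = hi - lo              # max change a single rating can cause
    scale = sensitivity / epsilon
    noisy = ratings + np.random.laplace(0.0, scale, size=ratings.shape)
    return np.clip(noisy, lo, hi)      # keep values in the valid range

ratings = np.array([[5.0, 3.0],        # users x items
                    [1.0, 4.0]])
print(privatize_ratings(ratings, epsilon=0.5))
```

The recommender is then trained on the noisy matrix only, so individual ratings are never exposed, at the cost of some recommendation accuracy.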

AI for Healthier Political Debate: This section describes how neural knowledge transfer can be applied for improved sentiment analysis of texts that include figurative language (e.g. sarcasm, irony, metaphors), with many applications in automated social media monitoring, customer feedback processing, e-mail scanning, etc. It also describes a new approach for public opinion monitoring via semantic analysis of tweets, especially relevant for political debates, for which an annotated dataset of Greek-language tweets was prepared and used to apply and validate the aforementioned analysis tools. Finally, it describes how the healthiness of online discussions on Twitter was assessed using the temporal dynamics of attention data.
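The knowledge-transfer idea is easy to demonstrate with off-the-shelf tooling: the snippet below reuses a transformer that was pretrained on large corpora and then fine-tuned for sentiment analysis. Note that this loads the library’s default generic checkpoint, not the AI4Media model for figurative language, and sarcastic inputs like the one shown are precisely where generic models tend to fail.

```python
# Minimal transfer-learning illustration: reuse a pretrained sentiment
# model via the Hugging Face pipeline API. This downloads the library's
# default checkpoint -- a generic model, not the AI4Media model for
# figurative language.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("Oh great, another meeting. Just what I needed."))
# Sarcasm like the above often fools generic sentiment models, which is
# exactly what motivates the knowledge-transfer work described here.
```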

Perception of hyper-local news: Local news is an indispensable source of information and stories of relevance to individuals and communities. This section describes several approaches for analysing local news and understanding how it is perceived by both people and machines: classifying COVID-19-related misinformation and disinformation in online news articles, building a corpus of local news about COVID-19 vaccination across European countries, and exploring online video as another source of health information.

Measuring and Predicting User Perception of Social Media: This section describes the tools and methods developed to accurately predict or identify viewers’ emotions and perceptions of content, including:

  • benchmarking and predicting media interestingness in images and videos
  • predicting video memorability, using Vision Transformers
  • use of decision-level fusion/ensembling systems for media memorability, violence detection and media interestingness
  • use of a Pairwise Ranking Network for Affect Recognition, and validating it for EEG data
  • estimating Continuous Affect with label uncertainty

Real-life effects of private content sharing: This section describes activities related to the analysis of content sharing, which can often lead to unexpected and serious consequences, especially when content appears in an unintended context (e.g. in a job application process rather than a personal environment). The main objective is to improve user awareness of data processing through feedback contextualization, applying a method that rates visual user profiles and individual photos in a given situation by exploiting situation models, visual detectors and a dedicated photographic profiles dataset.

The document can be found HERE, and the initial results include the following OSS tools:

  • Cascaded Cross MLP-Mixer GANs for Cross-View Image Translation: a novel two-stage framework with a new Cascaded Cross MLP-Mixer (CrossMLP) sub-network in the first stage and one refined pixel-level loss in the second stage. See https://github.com/Amazingren/CrossMLP
  • LERVUP (LEarning to Rate Visual User Profiles): an approach that focuses on the effects of data sharing in impactful real-life situations, which relies on three components: (1) a set of visual objects with associated situation impact ratings obtained by crowdsourcing, (2) a corresponding set of object detectors for mining users’ photos, and (3) a ground-truth dataset made of 500 visual user profiles that are manually rated per situation. See https://github.com/v18nguye/lervup_official (a simplified sketch of the underlying idea follows this list)
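To give a feel for the LERVUP approach, the hypothetical sketch below rates a user profile for one situation by combining per-photo object detections with crowdsourced impact ratings. The ratings table, detector output format and aggregation are simplified stand-ins invented for illustration; see the repository above for the real implementation.

```python
# Hypothetical sketch in the spirit of LERVUP: rate a visual user
# profile for one situation by combining object detections with
# crowdsourced impact ratings. All values and interfaces here are
# invented for illustration; see the official repository for the
# real implementation.

# Made-up impact of detected objects in a "job search" situation,
# from -1.0 (harmful) to +1.0 (beneficial).
IMPACT_JOB_SEARCH = {"laptop": 0.6, "book": 0.4, "beer": -0.7}

def photo_score(detections, impact_table):
    """Average the impact of objects detected in one photo.

    detections: list of (object_label, confidence) pairs
    produced by any off-the-shelf object detector.
    """
    scored = [conf * impact_table[label]
              for label, conf in detections if label in impact_table]
    return sum(scored) / len(scored) if scored else 0.0

def profile_score(photos, impact_table):
    """Rate a user profile as the mean of its photo scores."""
    return sum(photo_score(p, impact_table) for p in photos) / len(photos)

profile = [
    [("laptop", 0.9), ("book", 0.8)],  # photo 1 detections
    [("beer", 0.95)],                  # photo 2 detections
]
print(profile_score(profile, IMPACT_JOB_SEARCH))  # negative objects pull the score down
```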

Authors: Patrick Aichroth & Thomas Köllmer (Fraunhofer IDMT)

AI4Media at the DH Benelux conference

The AI4Media partner Netherlands Institute for Sound & Vision took part in this year’s Digital Humanities Benelux conference, hosted by the University of Luxembourg on 1-3 June.
They presented the paper “A two-way street between AI research and media scholars” and discussed the new research opportunities and considerations that AI opens up for humanities scholars. Participants had the opportunity to get a first glimpse of the demonstrator Sound & Vision is developing in AI4Media to introduce new research functionalities for media scholars.

More information about the conference HERE

30 research exchanges already implemented through the Junior Fellows Program

The AI4Media Junior Fellows Program is the project’s international research exchange initiative. Junior Fellows are Ph.D. students, MS students, and early career postdocs, who actively participate in research exchanges within and beyond the AI4Media Consortium.
The program is built around three values:

  1. Diversity: Junior Fellows are women and men from anywhere in the world working on AI for Media & Society
  2. Visibility: Junior Fellows benefit from close interaction with the consortium partners and from opportunities for professional growth as members of the AI4Media network
  3. Impact: Junior Fellows contribute to core tasks of the project, from research to development and integration. Through their work, Fellows generate concrete results, including code, data, prototypes, and publications.

As of June 1, 2022, a total of 30 individuals have participated or are scheduled to participate in the program.

Exchanges involve a Junior Fellow, a host institution, and a sender institution, where either the sender or the host is an AI4Media full consortium member. Senior Fellows are also invited to participate in the exchange program. This flexibility allows the project to receive researchers from, and send researchers to, other institutions, both within the project and worldwide. Exchanges can be physical, virtual, or hybrid (a combination of physical and virtual). For physical exchanges, Fellows are supported by AI4Media with funds that cover two-way travel and a partial stipend for one to three months. The virtual and hybrid formats further increase the possibilities to take part in the program.

While the COVID-19 pandemic limited mobility in the first year of the project, the Junior Fellow program has now taken off, thanks to the commitment of the consortium partners to identify and support external visitors to be hosted, as well as to strengthen internal collaborations within the consortium through exchanges of project-funded staff.

As of June 1, 2022, the program has received 30 applications (24 Junior and 6 Senior Fellows; 7 women and 23 men). A total of 11 exchanges have been completed, 13 are ongoing, and 6 Fellows will start in the summer/autumn of 2022. A balance between internal and external exchanges is emerging (16 internal collaborations between AI4Media partners and 14 external collaborations with parties outside the consortium). Finally, all three formats are being used (13 physical, 7 hybrid, and 10 virtual).

Multiple research results have already been produced. In future newsletters, we will feature interviews with some of the Junior Fellows to present their work and experience in the program.

More information about the Junior Fellow program can be found HERE. The videos of the AI4Media 1st Junior Fellow Day 2022 can be found HERE.

Authors: Daniel Gatica-Perez, (IDIAP Research Institute) & Filareti Tsalakanidou, (Information Technologies Institute – Centre for Research and Technology Hellas)

Why and how to use the European AI-on-demand platform

According to its own website, the European AI-on-demand platform is a one-stop shop for anyone looking for AI knowledge, technology, tools, services and experts. Its establishment is one of the main results of the AI4EU project, which was funded by the European Union as part of the Horizon 2020 initiative. The ultimate goal of this platform is to contribute to European sovereignty with respect to data and technology in the field of AI.
In this article, we provide an overview of the many different facets of the AI-on-demand platform, which reflects the diverse and colourful European AI landscape.

Since the end of AI4EU in December 2021, the AI-on-demand platform has been driven by the European AI community and the many follow-up projects of AI4EU within the European research initiatives ICT-48 and ICT-49. For example, the AI4EU Technical Governance Board (TGB) is currently managed by AI4Media, which belongs to ICT-48. More on the background of AI4EU and the specific contributions of AI4Media to the AI-on-demand platform can be found in an article in the previous AI4Media newsletter.

The main entry point to all facets of the AI-on-demand platform is the website www.ai4europe.eu, although the platform comprises not only this website but also the underlying virtual network and the activities of the involved parties. There are currently (May 2022) eleven contributing projects listed on the website, as well as more than one hundred organisations, ranging from companies to research institutes and universities. The organisations and projects are linked to each other, so one can easily see who participates in which projects.

Going further, the AI-on-demand platform is also home to several working groups, such as the Working Group for Ontology, to name just one. The Observatory on Society and Artificial Intelligence (OSAI) is another example of cross-project collaboration on the platform. It is planned that the ethics section will also include policy recommendations for specific areas such as the media sector, based on the results of the corresponding tasks and work packages of AI4Media.

On the website, one may also find dedicated sections with news and events regarding the platform and the contributing projects. In this context, it is worth mentioning the AI4EU Web Cafés, a series of live webinars on AI. Since the end of AI4EU, this exceptionally successful format is being continued as AI Cafés under the umbrella of AI4Media. Recordings of past cafés are also available on YouTube and GoToStage.

One of the core parts of the AI-on-demand platform is the AI Catalogue, which currently (May 2022) lists about 150 AI assets of various types: services, datasets, Docker containers, executables, Jupyter notebooks, libraries, machine learning models and tutorials. These assets are linked to the contributing projects and organisations. While the AI Catalogue is simply a list of items that do not necessarily implement common interfaces, some of these assets have also been technically integrated into AI4EU Experiments, which can be seen as the technical part of the AI-on-demand platform. AI4EU Experiments is an interesting topic in its own right and will be discussed in one of the next AI4Media newsletters.

Not all facets of the AI-on-demand platform have been touched on in this article, and new ones might emerge as the platform develops, so it is worthwhile to check out the website from time to time.

Contributions to the AI-on-demand platform are welcome and can be submitted by anyone. For publishing content such as AI assets, news, or events in the existing sections, it is sufficient to have an EU Login account. With this, you can log in to the AI4EU website and submit your content for review. Once it has passed the review process, it will be published in the respective section. New sections and features can be added to the platform upon request, to be discussed in the TGB.

Starting in July 2022, the AI-on-demand platform will find its new home in the Coordination and Support Action (CSA) AI4Europe. The sustainability of the platform is therefore ensured for the years to come, and it will be able to continue its substantial contributions to European sovereignty with respect to data and technology in the field of AI.

Author: Andreas Steenpass, (Fraunhofer IAIS)

One step ahead in multimedia analysis and summarization

AI4Media explores innovative Deep Neural Networks (DNNs) for image/video/audio analysis and summarisation through cutting-edge machine learning. The work performed so far has resulted in novel ways to automatically shorten long videos through unsupervised key-frame extraction, as well as in novel AI tools for the management and retrieval of media datasets.
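As a rough illustration of what unsupervised key-frame extraction involves, the sketch below clusters per-frame colour histograms and keeps one representative frame per cluster. This is a classical baseline shown for exposition only, not the adversarial deep-learning methods developed in AI4Media (see the papers listed further down).

```python
# Rough baseline for unsupervised key-frame extraction: cluster
# per-frame colour histograms with k-means and keep the frame nearest
# to each cluster centre. A classical illustration only -- not the
# adversarial deep-learning approach developed in AI4Media.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def extract_keyframes(video_path, n_keyframes=5, step=10):
    cap = cv2.VideoCapture(video_path)
    frames, feats = [], []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:  # subsample frames for speed
            hist = cv2.calcHist([frame], [0, 1, 2], None,
                                [8, 8, 8], [0, 256] * 3).flatten()
            feats.append(hist / hist.sum())  # normalised colour histogram
            frames.append(frame)
        idx += 1
    cap.release()

    feats = np.array(feats)  # needs at least n_keyframes sampled frames
    km = KMeans(n_clusters=n_keyframes, n_init=10).fit(feats)
    keyframes = []
    for c in range(n_keyframes):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(feats[members] - km.cluster_centers_[c], axis=1)
        keyframes.append(frames[members[np.argmin(dists)]])
    return keyframes
```

Deep summarisation methods replace the hand-crafted histograms with learned representations and optimise key-frame selection directly, which is where the adversarial and dictionary-loss formulations referenced below come in.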

However, typical DNNs require very large amounts of labeled training data to achieve good performance. In a systematic effort to bypass this, AI4Media has also researched novel approaches to training or adapting DNNs for scenarios marked by a lack of large-scale, domain-specific datasets or annotations. The results so far include several innovative methods for few-shot, semi-supervised or unsupervised learning with media data.

In addition, AI4Media has researched advanced audio analysis for automatic music annotation and for detecting partial matching/reuse of audio, mainly relying on DNNs. Overall, these algorithms can be readily exploited by industry-oriented tools for intelligent and automated media archive management, analysis, search and retrieval, as well as for synthetic audio detection/verification.

In this context, AI4Media has produced, up to now, several modern AI tools for:

Video key-frame extraction. Check out the related papers:

Video Summarization Using Deep Neural Networks: A Survey (Link)

Adversarial Unsupervised video summarization augmented with dictionary loss (Link)

Information retrieval on cultural media datasets, relying on a synthesis of computational deep learning with symbolic semantic reasoning. Check out the related paper:

Learning and Reasoning for Cultural Metadata Quality (Link)

Few-shot object detection. Check out the related code:

Few-shot object detection (Code)

Unsupervised domain adaptation for traffic density estimation/counting or for visual object detection. Check out the related paper:

Domain Adaptation for Traffic Density Estimation (Link)

Advanced video browsing and search. Check out the related paper:

The VISIONE Video Search System: Exploiting Off-the-Shelf Text Search Engines for Large-Scale Video Retrieval (Link)

Semi-supervised learning for fine-grained visual categorization. Check out the related paper:

Fine-Grained Adversarial Semi-supervised Learning (Link)

Deep dictionary-based representation learning. Check out the related paper and code:

When Dictionary Learning Meets Deep Learning: Deep Dictionary Learning and Coding Network for Image Recognition With Limited Data (Link)

Deep Micro-Dictionary Learning and Coding Network (Code)

Even though these results are only the outcomes of the first project period, research plans have already been laid to expand upon them in exciting new directions.

Author: Ioannis Mademlis, (Aristotle University of Thessaloniki)

Legal and ethical framework of trusted AI

AI4Media conducted an initial analysis of the legal and ethical framework for trusted AI, addressing the question of how the GDPR provisions should be interpreted when applied in an AI system context.

This work comprises:

  • an analysis of the EU data protection framework relevant for AI systems;
  • a reflection on the upcoming EU legislation;
  • an initial suggestion towards reconciling the AI and GDPR legal frameworks;
  • a preliminary list of recommendations for trusted and GDPR-compliant AI; and
  • ways to mitigate and prevent risks and gaps.

Firstly, the research showed that even though the GDPR does not refer to “artificial intelligence”, many provisions of the legal text prove to be relevant for AI systems. It also highlighted a lack of sufficient clarity, as well as uncertainties and diverging opinions between scholars and interpretative guidelines: the academic literature showed sometimes converging and sometimes conflicting opinions within the research community on the scope of some GDPR provisions applied to AI systems. The research also introduced the use of AI systems in the media environment, including recommender and targeted advertising systems.

The work then delivered a comprehensive description of the overarching principles of the GDPR, including lawfulness, fairness, and transparency. A detailed analysis of the GDPR Article 5 principles of purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality (security), and accountability was also provided.

The different data subject rights, as applied in the context of AI, were also analysed. The rights considered were: the right to be informed, the right not to be subject to a decision based solely on automated processing, the so-called right to explanation, the right of access, the right to rectification, the right to erasure, the right to restrict processing, and the right to object.

The report also presented the growing challenges involved in complying with data subjects’ requests for rights enforcement in big datasets, including complexities related to the different stages of AI system processing, transparency and the right to information as a key to exercising the other rights, uncertainties regarding the application of data subjects’ rights, AI system interfaces that are unfriendly for rights enforcement, and a lack of enforcement leading to trade-offs.

The analysis also briefly touched upon upcoming European legislation relevant to the provisions of the GDPR and to AI systems processing personal data, including the AI Act proposal, the Data Governance Act proposal, and the proposed Data Act. The legislator seems well aware of the current challenges of GDPR and AI, as these upcoming instruments try to complement the GDPR and create additional safeguards, data quality requirements and favourable conditions to enhance data sharing. However, as they are still being negotiated, it remains to be seen how this will materialise.

Finally, the report presents a set of initial recommendations built upon the analysis conducted throughout the first 18 months of the project. These recommendations address ways to ensure the development of trusted and GDPR-compliant AI, offering a conclusion on the gaps and challenges identified throughout the report, while also providing ways forward to mitigate and prevent the identified issues for trusted AI.

What’s next? Further research will dive deeper into legal data protection questions around the use of AI applications in media environments and will investigate how people can be made aware of what is being done with their data. This deliverable is the first step toward the final analysis, which is due in August 2023.

Access the full report HERE

Author: Noémie Krack, (KU Leuven)

The AI4Media Evaluation-as-a-Service Platform

Benchmarking is a vital tool for the development of new technologies, as it enables a fair process for comparing the performance of different AI algorithms on common grounds, e.g., data, training, and metrics. The dedicated AI4Media open Benchmarking Platform, currently in its prototype phase, provides such capabilities.

It was developed on the CodaLab framework. A testing benchmark is also provided as an example, namely the novel “late fusion” benchmark (the ImageCLEFfusion 2022 task). The platform allows users to create benchmarking tasks, set up cloud-based repositories, and manage participants and submitted data, and it offers API integration.
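For readers unfamiliar with the task, “late fusion” combines the outputs of several already-trained models at decision level. The snippet below shows a generic weighted-average fusion of per-class scores; it is an illustration of the concept, not the ImageCLEFfusion 2022 protocol.

```python
# Generic illustration of decision-level ("late") fusion: combine the
# prediction scores of independently trained models with a weighted
# average. Toy values -- not the ImageCLEFfusion 2022 protocol.
import numpy as np

def late_fusion(score_matrices, weights=None):
    """Weighted average of per-model score matrices.

    score_matrices: list of arrays, each (n_samples, n_classes).
    """
    scores = np.stack(score_matrices)     # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(score_matrices))
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()              # normalise weights to sum to 1
    return np.tensordot(weights, scores, axes=1)  # (n_samples, n_classes)

model_a = np.array([[0.9, 0.1], [0.4, 0.6]])
model_b = np.array([[0.7, 0.3], [0.2, 0.8]])
fused = late_fusion([model_a, model_b], weights=[2, 1])
print(fused.argmax(axis=1))  # fused class decision per sample
```

A benchmark like ImageCLEFfusion then scores such fused outputs against ground truth under common metrics, which is exactly the kind of comparison the platform standardises.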

The platform brings several advantages:

  • a European-based Evaluation-as-a-Service platform;
  • better control over data privacy, as access to data can be managed and the platform can even be deployed on local installs, thus separating it from the outside world;
  • development of reproducible and computationally efficient AI, through the high-level functions and options offered to the users;
  • addition of computational efficiency metrics that organizers can use to understand the computational complexity of the participants’ methods.

Access the prototype source code HERE

Author: Bogdan Ionescu, (Politehnica University of Bucharest)

Kick-off for the first 10 projects funded by the AI4Media Open Calls

The 10 projects funded by AI4Media’s 1st Open Call are underway, having held their official kick-off meeting on 2 March 2022. In the context of the funding programme, AI4Media will financially support each project with up to €50,000 and will provide tailored coaching, market-driven services, and business support, in addition to large-scale visibility. Topics addressed by the projects include AI music and audio, media authentication, fact-checking, disinformation, and much more.

The objective of AI4Media’s Open Call #1 was to engage companies and researchers in developing new research and applications for AI, contributing to the enrichment of the pool of technological tools. Submissions were required to address one of seven specific challenges, or an open challenge, within either the Research or the Application track.

The 10 projects were selected from a total of 60 submissions from 22 countries. The competitive open call ran from 1 September to 1 December 2021. Eligible submissions underwent external evaluation by independent experts, and a selected group of proposals went on to the interview stage. Each project has been awarded up to €50,000 to implement its work plan.

A quick glance at the funded projects:

AIEDJ – AI Empathic DJ App (musicube GmbH, Germany): Aims to expand on existing AI software for audio and music and adapt it to each listener’s perspective on music so that the AI learns and adapts to different musical tastes.

CIMA – Next-Gen Collaborative Intelligence for Media Authentication (AdVerif.ai, Israel): Aims to develop a next-generation intelligence platform to make a collaborative collection of evidence for media authentication easier and faster. The platform will adopt cutting-edge AI methods from cyber-security to the media domain, empowering fact-checkers and journalists to be more effective.

CUHE – An explainable recommender system for holistic exploration and CUration of media HEritage collections (IN2 Digital Innovations GmbH, Germany): Aims to develop and demonstrate a web-based application based on AI recommendations that will allow cultural heritage professionals as well as (humanities) researchers to explore existing media and cultural heritage digital collections in a more holistic way and allow them to curate new galleries or create digital stories and exhibitions which can showcase and share the new insights gained.

InPreVI – Inauthentic web traffic Prediction in Video marketing campaigns for investment optimization (JOT Internet Media, Spain): Aims to develop an innovative AI-based system, using existing JOT-owned video web traffic data, to (1) identify the main behavioural patterns of inauthentic users in order to predict their actions and limit their impact on video marketing campaigns, and (2) model the quality score associated with a campaign.

VRES – Varia Research (Varia UG, Germany): Aims to bring AI power to the frontlines of the media industry, to the journalists. While journalistic research processes today are highly fragmented and based on workarounds, Varia Research will be the first holistic application that gives all central research activities a common home. 

edgeAI4UAV – Computer Vision and AI Algorithms Edge Computation on UAVs (International Hellenic University, Greece): Aims to develop an edge computation node for UAVs equipped with lightweight active computer vision and AI (deep learning) algorithms capable of detecting and tracking moving objects, while at the same time ensuring robust UAV localization and reactive navigation behaviour.

NeurAdapt – Development of a Bio-inspired, resource efficient design approach for designing Deep Learning models (Irida Labs, Greece): Aims to deliver a framework in which established techniques such as channel gating, channel attention and calibrated dropout are synthesized in order to formulate a building block of a novel methodology for designing CNN models.

RobaCOFI – Robust and adaptable comment filtering (Institut Jozef Stefan, Slovenia): Aims to develop new methods to bypass the problem of filtering and moderating comments and make the initial implementation process easy and fast, and to develop methods for semi-automatic annotation of data, including new variants of active learning in which the AI tools can quickly select the data they need to have labelled.

SMAITE – Preventing the Spread of Misinformation with AI-generated Text Explanations (University of Manchester, United Kingdom): Aims to develop a fact-checking system underpinned by deep learning-based generative language models that will generate explanations meeting the identified requirements.

TRACES – AuTomatic Recognition of humAn-written and deepfake-generated text disinformation in soCial mEdia for a low-reSourced language (Sofia University “St. Kliment Ohridski”, GATE Institute, Bulgaria): Aims to provide solutions to the problem of fake content and disinformation spread worldwide and across Europe, and the detection of deep fakes, by creating methods and resources for detecting both human and deepfake disinformation in social media for low-resourced languages.

More information about the projects HERE

Authors: Samuel Almeida & Catarina Reis, (F6S)

New white paper maps the societal potentials and challenges of AI for the media

A newly published white paper offers an in-depth mapping of the main potentials and challenges of AI applications across the media cycle, providing a unique overview of the state-of-the-art discussion of the societal impacts of AI. Based on this mapping, provisional guidelines and considerations are distilled to guide the future work of industry professionals, policy makers and researchers.

The white paper was produced by researchers from the University of Amsterdam, the Netherlands Institute for Sound & Vision and KU Leuven as part of the AI4Media project. It is based on a thorough literature review of academic journals published by scholars in the humanities, social sciences, media studies and legal studies, as well as reports developed either with a specific focus on AI in the media sector or with a broader outlook on AI in society.

The white paper is divided into two major parts. The first part identifies the main potentials and challenges across the entire media cycle, including i) ideation, ii) content gathering, iii) media content production, iv) media content curation and distribution, v) deliberation over the content, and vi) archival practices. The second part explores six societal concerns that affect or impact the media industry. These include:

  • Biases and discrimination: AI is discussed, on the one hand, as a potential solution for mitigating existing media biases (e.g., the overrepresentation of male sources). On the other hand, there is concern about how AI systems might sustain or further entrench existing biases (e.g., in content moderation, where minorities are less protected from hate speech) and how that might have severe long-term effects on the role of media in society and the democratic practices it cultivates.
  • Media (in)dependence and commercialisation: The “platformisation” of society also applies to the media sector, which depends on, for example, social media for content distribution and is entangled in commercial data infrastructures. A major concern is the effect of such platform dependencies on media independence.
  • Inequalities in access to AI: While the use of AI is expanding rapidly, it is not doing so equally across the world. The primary beneficiaries of AI solutions remain the global north and particularly English-speaking countries, so inequality in access is also a major concern. In the media sector, this gap is widening further because of existing competitive differences between smaller and larger media organisations, which could reduce media diversity.
  • Labour displacements, monitoring, and professional control: AI is often discussed in terms of the risk of labour displacement. In the media sector, the effects of AI on existing jobs remain limited, although some examples of displacement are emerging. However, AI also introduces new power asymmetries between employees and employers as metrics and monitoring practices become more common. Lastly, AI is transforming existing media practices (e.g., genres and formats) and challenging professional control and oversight of both production and distribution practices.
  • Privacy, transparency, accountability, and liability: The privacy discussion around AI for media relates mostly to data privacy, where commercial and democratic ideals conflict. Media organisations must consider their responsibility for data privacy, and new best practices of responsible data handling are needed. Transparency is mainly discussed in relation to the disclosure practices media organisations currently employ and the streamlining needed to ensure better transparency across the media landscape. Accountability is mainly discussed in relation to how and where to place responsibility as new actors, such as service providers of AI, enter the media landscape.
  • Manipulation and mis- and disinformation as an institutional threat: The threat of manipulation is highly present in the discussion of AI and media, as well as in society at large, through concepts such as ‘fake news’. In the media sector specifically, much discussion centres on how other actors can manipulate public opinion by manipulating content (e.g., deepfakes) or modes of distribution (e.g., bots). As media continue to serve an important role in society as trusted sources of information, the potential damage to the trustworthiness of media is significant. As a core actor in the fight against disinformation, the media sector needs tools that support the work of its professionals.

In the white paper, these discussions are fleshed out further, and core points of consideration are suggested for the media industry, policy makers and AI researchers who engage with the media sector, to help guide future work and research on AI.

Access the full white paper HERE.

A second version of the white paper will be developed and published in December 2023. In this version, some of these core points of consideration will be further explored and qualified through workshops with relevant media organisations, which can help provide even more concrete suggestions for best practices.

Author: Anna Schjøtt Hansen, (University of Amsterdam)

Discover the AI4Media Roadmap on AI technologies and applications for the Media Industry!

The AI4Media project developed a Roadmap on AI technologies and applications for the Media that aims to provide a detailed overview of the complex landscape of AI for the media industry.

This Roadmap:

  • analyses the current status of AI technologies and applications for the media industry;
  • highlights existing and future opportunities for AI to transform media workflows, assist media professionals, and enhance the user experience in different industry sectors;
  • offers useful examples of how AI technologies are expected to benefit the industry in the future; and
  • discusses facilitators, challenges, and risks for the wide adoption of AI by the media.

The Roadmap comprises 35 white papers discussing different AI technologies and multimedia applications, the use of AI in different media sectors, AI risks for society and the economy, legal and ethical aspects and the latest EU regulations, AI datasets, benchmarks and open repositories, opportunities in the time of the pandemic, environmental aspects, and more.

The AI4Media Roadmap offers an in-depth analysis of the AI for Media landscape based on a multi-party, multi-dimensional and multi-disciplinary approach, involving the AI4Media partners, external media and AI experts, the broader AI research community, and the community of media professionals at large. Three main tools have been used to describe this landscape:

  • a multi-disciplinary state-of-the-art analysis involving AI experts, experts on social sciences, ethics and legal issues, as well as media industry practitioners; 
  • a public survey targeted at AI researchers/developers and media professionals; and
  • a series of short white papers on the future of AI in the media industry that focus on different AI technologies and applications as well as on different media sectors, exploring how AI can positively disrupt the industry, offering new exciting opportunities and mitigating important risks.

Based on these tools, we provide a detailed analysis of the current state of play and future research trends with regard to media AI (short for “use of AI in media”), which comprises the following parts.

State-of-the-art analysis of AI technologies and applications for the media. Based on an extensive analysis of roadmaps, surveys, review papers and opinion articles focusing on the trends, benefits, and challenges of the use of AI, we provide a clear picture of the most transformative applications of AI in the media and entertainment industry. Our analysis identifies AI applications that are already having or can have a significant impact in most media industry sectors by addressing common needs and shared aspirations about the future as well as AI technologies that hold the greatest potential to realise the media’s vision for AI. 

Discussion of social, economic and ethical implications of AI. Complementing the previous state-of-the-art analysis, which highlights AI’s potential for the media industry from a technology and practical application point of view, this analysis dives into the social and ethical implications of AI, offering the point of view of social scientists, ethics experts and legal scholars, based on an extensive literature review of both industry reports and scholar articles. The most prevalent societal concerns and risks are identified, including bias and discrimination; media (in)dependence; unequal access to AI; privacy, transparency, accountability and liability; etc. In addition, we identify practices to counteract the potential negative societal impacts of media AI.

EU policy initiatives and their impact on future AI research for the media. We provide an overview of EU policy initiatives on AI, focusing on initiatives having a clear focus on the media industry. We discuss both policy (non-binding provisions) and regulatory initiatives (leading to the adoption of binding legal provisions), including the Digital Services Act, the AI Act, the Code of Practice on disinformation, the Proposal on transparency and the targeting of political advertising and more.

Analysis of survey results. Two online surveys were launched: i) a public survey aiming to collect the opinions of the AI research community and media industry professionals with regard to the benefits, risks, technological trends, challenges and ethics of AI use in the media industry (150 respondents from 26 countries); and ii) a small-scale internal survey addressed to the consortium, aiming to collect opinions on the benefits and risks of media AI for society and democracy.

Main AI technology & research trends for the media sector. Based on the results of the state-of-the-art analysis, we highlight the potential of specific AI technologies to benefit the media industry, including reinforcement learning, evolutionary learning, learning with scarce data, transformers, causal AI, AI at the edge, bioinspired learning, quantum computing for AI learning. For each technology, a white paper offers an overview of the current status of the technology, drivers and challenges for its development and adoption, and future outlook. The white papers also include vignettes, i.e. short stories with media practitioners or users of media services as the main characters, aiming to vividly showcase how AI innovations could help the media industry in practice. 

Main AI applications for the media sector. Based on the results of the state-of-the-art analysis, we highlight the potential of specific AI applications to benefit the media industry, including multimodal knowledge representation and retrieval, media summarisation, automatic content creation, affective analysis, NLP-enabled applications, and content moderation. Similarly to the above, a short white paper is presented for each application, offering a clear overview of the current status of the technology, drivers and challenges for its development and adoption, and future outlook.

Future of AI in different media sectors. We present a collection of white papers, focusing on the deployment of AI in different media industry sectors, including news, social media, film/TV, games, music and publishing. We also explore the use of AI to address critical societal phenomena such as disinformation and to enhance the online political debate. Finally, we explore how AI can help the study of media itself in the form of AI-enabled social science tools. These papers offer an in-depth look at the current status of each sector with regard to AI adoption, most impactful AI applications, main challenges encountered, and future outlook. 

Analysis of future trends for trustworthy AI. We present four white papers focusing on different aspects of trustworthy AI, namely AI robustness, AI explainability, AI fairness, and AI privacy, with a focus on media sector applications. The analysis explains existing trustworthy AI limitations and potential negative impacts.

AI datasets and benchmarks. We analyse existing AI datasets and benchmark competitions, discussing current status, research challenges and future outlook, while also providing insights on the ethical and legal aspects of the availability of quality data for AI research.

AI democratisation. We discuss issues related to AI democratisation, focusing on open repositories for AI algorithms and data and research in the direction of integrated intelligence, i.e. AI modules that could be easily integrated into other applications to provide AI-enabled functionalities. 

External forces that could shape the future. We discuss the forces that could shape the future of the use of AI in the media sector, focusing on legislation/ regulation, the pandemic and its impact, and the climate crisis.

The Roadmap has been developed as part of the AI4Media public deliverable D2.3 “AI technologies and applications in media: State of Play, Foresight, and Research Directions”.

Access the full version of the Roadmap

Access the Web version of the Roadmap

Author: Filareti Tsalakanidou, (Information Technologies
Institute – Centre for Research and Technology Hellas)

Kick-off of AI4Media’s Open Call #1

The first funding programme of the AI4Media project officially started on 1 March 2022.

The 10 projects selected for funding under the first AI4Media open call will carry out their activities over nine or twelve months: the five ‘Application’ projects will run for nine months, while the five ‘Research’ projects will run for twelve months.

The funded projects are expected to develop new research and applications for AI, and contribute to the enrichment of the pool of technological tools of the AI4Media platform.

During the funding programme, AI4Media will provide beneficiaries with tailored coaching, market-driven services, and business support, in addition to large-scale visibility. Beneficiaries will receive up to €50,000 to implement their projects.

More information about each project will soon be available on our website.