AI-based tools to address societal problems

AI4Media’s work on “Human- and Society-centered AI Algorithms” in the first 16 months of the project comprises the following activities:

  • policy recommendations for content moderation, which investigate aspects of future regulation: who should decide which content should be removed, for which reasons, when and how;
  • development of detectors for content manipulation and synthesis, which address the growing problem of disinformation based on visual, audio and textual content;
  • development of trusted recommenders, which address challenges related to privacy and bias for recommendation services;
  • development of tools for healthier political debate, aiming at sentiment analysis, public opinion monitoring, and measuring the overall “healthiness” of online discussions;
  • development of tools to understand the perception of hyper-local news, focusing on health information during this period;
  • measuring user perception of social media, focusing on tools and methods that can accurately predict or identify viewers’ emotions and their perception of content properties such as interestingness or memorability;
  • measuring real-life effects of private content sharing, which can often lead to unexpected and serious consequences.

All this is presented in the document “First-generation of Human- and Society-centered AI algorithms (D6.1)”, which also includes references to publications and published software.

Policy recommendations for content moderation: This section addresses a key legal topic around media: who should decide which content should be removed, for which reasons, and when and how? In this context, several questions are addressed:

  • Which overall approach should be taken? Self-regulation (such as codes of practice and codes of conduct), or hard-law EU regulatory instruments?
  • How can regulation approaches be designed to respect fundamental rights such as freedom of expression without limiting the open public debate?
  • How can it be ensured that legitimate, lawful content is not deleted and that freedom of expression is not violated?
  • How do users know what gets deleted, and whether what gets deleted violates laws or not?

Beyond that, the section addresses the use of automated tools in content moderation, offering a critical assessment of their technical limitations and pointing out risks for fundamental human rights, such as freedom of expression. Finally, it introduces the main elements of the EU regulatory framework applicable to content moderation.

Manipulation and synthetic content detection in multimedia: This section addresses various approaches to audio, video and textual content verification, i.e. the detection and localization of manipulations and fabrications, with a focus on the latter. Due to the latest advancements in Generative Adversarial Networks (GANs) and Language Models (LMs), distinguishing real from fake content (deepfakes) is becoming increasingly difficult. Apart from many beneficial applications, these technologies also enable applications that are potentially harmful to individuals, communities, and society as a whole, especially with respect to the creation and distribution of propaganda, phishing attacks, fraud, etc., so there is a growing demand for technologies to support content verification and fact-checking. AI4Media aims to develop such technologies, which are also used within several of the AI4Media use cases. This document reports on the activities and results of the first project phase:

  • for visual synthesis and manipulation detection, three methods for detecting synthetic/manipulated images and videos (based on facial features and CNN/LSTM architectures, optical flow, and CNNs), one method for image synthesis (layout-to-image translation based on a novel Double Pooling GAN with a Double Pooling Module), and an evaluation of existing state-of-the-art CNN-based approaches are presented (a minimal sketch of the CNN/LSTM idea follows this list);
  • for audio synthesis and manipulation detection, two detection methods (based on microphone classification and DNNs) as well as synthetic speech generation tools for training and testing are presented;
  • for text synthesis and manipulation detection, an approach for the composition of a dataset of DeepFake tweets and a method to distinguish between synthetic and original tweets are presented.
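To make the visual detection approach more concrete, here is a minimal sketch (not the project’s actual code) of the CNN/LSTM idea referenced above: a CNN embeds each video frame, an LSTM aggregates the frame sequence, and a linear head classifies the clip as real or fake. The backbone, sizes, and setup are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CnnLstmDeepfakeDetector(nn.Module):
    """Frame-level CNN features + LSTM temporal aggregation (illustrative)."""
    def __init__(self, hidden_size=256):
        super().__init__()
        backbone = resnet18(weights=None)   # per-frame feature extractor
        backbone.fc = nn.Identity()         # keep the 512-d pooled features
        self.cnn = backbone
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)  # logits: [real, fake]

    def forward(self, clips):               # clips: (batch, frames, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)  # (B, T, 512)
        _, (h_n, _) = self.lstm(feats)      # final hidden state summarises the clip
        return self.head(h_n[-1])           # (B, 2)

model = CnnLstmDeepfakeDetector()
logits = model(torch.randn(2, 16, 3, 224, 224))  # two 16-frame clips
print(logits.shape)                              # torch.Size([2, 2])
```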

Hybrid, privacy-enhanced recommendation: This section outlines the initial activities related to recommendation (most will take place in the second half of the project). Recommender systems are powerful tools that can help users find “the needle in the haystack” and provide orientation, but they also strongly influence how users perceive the world and can contribute to a problem often referred to as “filter bubbles” – AI4Media aims at proposing how such effects can be minimized. Beyond that, the task also aims at developing tools to address privacy, which is a potential issue for all recommenders that exploit user or usage data, by applying so-called Privacy Enhancing Technologies (PETs).
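As an aside on what a Privacy Enhancing Technology can look like in this context, the following is a minimal sketch assuming local differential privacy over user ratings; the epsilon value and rating scale are invented for illustration and are not AI4Media parameters.

```python
import numpy as np

def privatize_ratings(ratings, epsilon=1.0, lo=1.0, hi=5.0):
    """Add Laplace noise calibrated to the rating range before data leaves the device."""
    scale = (hi - lo) / epsilon            # sensitivity / privacy budget
    noisy = ratings + np.random.laplace(0.0, scale, size=ratings.shape)
    return np.clip(noisy, lo, hi)          # keep values on the original scale

true_ratings = np.array([5.0, 3.0, 4.0])
print(privatize_ratings(true_ratings))     # noisy ratings sent to the recommender
```

Smaller epsilon values yield stronger privacy but noisier signals, so such a recommender trades accuracy for privacy.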

AI for Healthier Political Debate: This section describes how neural knowledge transfer can be applied for improved sentiment analysis of texts that include figurative language (e.g. sarcasm, irony, metaphors), with many applications in automated social media monitoring, customer feedback processing, e-mail scanning, etc. It also describes a new approach for public opinion monitoring via semantic analysis of tweets, especially relevant for political debates, which involved preparing an annotated dataset for semantic analysis of tweets in the Greek language and applying/validating the aforementioned analysis tools with it. Finally, it describes how the healthiness of online discussions on Twitter was assessed using the temporal dynamics of attention data.
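The knowledge-transfer idea can be illustrated with a minimal sketch: start from a transformer pretrained on general text, then fine-tune it on a small figurative-language corpus. The model name and toy data below are placeholders, not the project’s actual tools or datasets.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)  # negative / positive

# Tiny illustrative dataset; a real setup would use an annotated irony/sarcasm corpus.
data = Dataset.from_dict({
    "text": ["Oh great, another Monday...", "What a genuinely lovely day"],
    "label": [0, 1],
}).map(lambda x: tok(x["text"], truncation=True, padding="max_length", max_length=32))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
).train()
```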

Perception of hyper-local news: Local news is an indispensable source of information and stories of relevance to individuals and communities. This section describes several approaches for analysing local news and understanding how it is perceived both by people and machines: classification of COVID-19-related misinformation and disinformation in online news articles, building a corpus of local news about COVID-19 vaccination across European countries, and exploration of online video as another health information source.

Measuring and Predicting User Perception of Social Media: This section describes the tools and methods developed to accurately predict or identify viewers’ emotions and perceptions of content, including:

  • benchmarking and predicting media interestingness in images and videos
  • predicting video memorability using Vision Transformers
  • use of decision-level fusion/ensembling systems for media memorability, violence detection and media interestingness
  • use of a Pairwise Ranking Network for Affect Recognition, validated on EEG data (see the sketch after this list)
  • estimating Continuous Affect with label uncertainty
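For intuition on the pairwise-ranking approach mentioned above, here is a minimal sketch assuming a shared scorer over precomputed features (e.g. EEG embeddings) trained with a margin ranking loss; the feature size and margin are illustrative assumptions.

```python
import torch
import torch.nn as nn

scorer = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.MarginRankingLoss(margin=0.2)

x_a, x_b = torch.randn(32, 128), torch.randn(32, 128)  # paired stimuli features
target = torch.ones(32)   # +1: x_a was annotated as eliciting stronger affect
loss = loss_fn(scorer(x_a).squeeze(1), scorer(x_b).squeeze(1), target)
loss.backward()           # learns relative affect without absolute labels
```

Learning from pairwise comparisons sidesteps the noisiness of absolute affect labels, since annotators agree more readily on which of two stimuli feels stronger.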

Real-life effects of private content sharing: This section describes activities related to the analysis of content sharing, which can often lead to unexpected and serious consequences, especially when content appears in an unintended context (e.g. in a job application process rather than a personal environment). The main objective is to improve user awareness about data processing through feedback contextualization, applying a method that rates visual user profiles and individual photos in a given situation by exploiting situation models, visual detectors and a dedicated photographic profiles dataset.

The document can be found HERE, and the initial results include the following OSS tools:

  • Cascaded Cross MLP-Mixer GANs for Cross-View Image Translation: a novel two-stage framework with a new Cascaded Cross MLP-Mixer (CrossMLP) sub-network in the first stage and a refined pixel-level loss in the second stage. See https://github.com/Amazingren/CrossMLP
  • LERVUP (LEarning to Rate Visual User Profiles): an approach that focuses on the effects of data sharing in impactful real-life situations, which relies on three components: (1) a set of visual objects with associated situation impact ratings obtained by crowdsourcing, (2) a corresponding set of object detectors for mining users’ photos, and (3) a ground truth dataset made of 500 visual user profiles which are manually rated per situation. See https://github.com/v18nguye/lervup_official
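To convey the LERVUP rating idea, here is a hypothetical sketch (not the repository’s API; the object names and impact ratings below are invented for illustration): detected visual objects in a user’s photos are weighted by crowdsourced situation-impact ratings and aggregated into a per-situation profile score.

```python
# Crowdsourced situation-impact ratings for detected visual objects (invented values).
impact = {"job_search": {"beer": -0.8, "laptop": 0.4, "suit": 0.6}}

def rate_profile(photos, situation):
    """photos: per-photo detector outputs, e.g. [{'beer': 0.9}, {'suit': 0.7}]."""
    ratings = impact[situation]
    score = sum(conf * ratings.get(obj, 0.0)     # detector confidence x impact
                for detections in photos
                for obj, conf in detections.items())
    return score / max(len(photos), 1)           # average over the visual profile

print(rate_profile([{"beer": 0.9}, {"suit": 0.7, "laptop": 0.5}], "job_search"))
```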

Authors: Patrick Aichroth & Thomas Köllmer (Fraunhofer IDMT)

AI4Media at the DH Benelux conference

The AI4Media partner Netherlands Institute for Sound & Vision was part of this year’s Digital Humanities Benelux conference, which took place on 1-3 June and was hosted by the University of Luxembourg.
They presented the paper titled “A two-way street between AI research and media scholars” and discussed new research opportunities and considerations that AI opens up for humanities scholars. Participants had the opportunity to get a first glimpse of the demonstrator Sound & Vision is developing in AI4Media to introduce new research functionalities for media scholars.

More information about the Conference HERE

30 research exchanges already implemented through the Junior Fellows Program

The AI4Media Junior Fellows Program is the project’s international research exchange initiative. Junior Fellows are Ph.D. students, MS students, and early career postdocs, who actively participate in research exchanges within and beyond the AI4Media Consortium.
The program is built around three values:

  1. Diversity: Junior Fellows are women and men from anywhere in the world working on AI for Media & Society.
  2. Visibility: Junior Fellows benefit from close interaction with the consortium partners and from opportunities for professional growth as members of the AI4Media network.
  3. Impact: Junior Fellows contribute to core tasks of the project, from research to development and integration. Through their work, Fellows generate concrete results, including code, data, prototypes, and publications.

As of June 1, 2022, a total of 30 individuals have participated or are scheduled to participate in the program.

Exchanges involve a Junior Fellow, a host institution, and a sender institution, where either the sender or the host is an AI4Media full consortium member. Senior Fellows are also invited to participate in the exchange program. This flexibility allows the project to receive researchers from, and send researchers to, other institutions, both internal to the project and worldwide. Exchanges can be physical, virtual, or hybrid (a combination of physical and virtual). For physical exchanges, Fellows are supported by AI4Media with funds that cover two-way travel and a partial stipend for one to three months. The virtual and hybrid formats further increase the possibilities to take part in the program.

While the COVID-19 pandemic limited mobility in the first year of the project, the Junior Fellows Program has now taken off, thanks to the commitment of the consortium partners to identify and support external visitors to be hosted, as well as to strengthen internal collaborations within the consortium through exchanges of project-funded staff.

As of June 1, 2022, the program has received 30 applications (24 Junior and 6 Senior Fellows; 7 women and 23 men). A total of 11 exchanges have been completed, 13 are ongoing, and 6 Fellows will start in the summer/autumn of 2022. A balance between internal and external exchanges is emerging (16 internal collaborations between AI4Media partners, and 14 external collaborations with parties outside the consortium). Finally, all three formats are being used (13 physical, 7 hybrid, and 10 virtual).

Multiple research results have already been produced. In future newsletters, we will feature interviews with some of the Junior Fellows to present their work and experience in the program.

More information about the Junior Fellows Program can be found HERE. The videos of the AI4Media 1st Junior Fellow Day 2022 can be found HERE.

Authors: Daniel Gatica-Perez (IDIAP Research Institute) & Filareti Tsalakanidou (Information Technologies Institute – Centre for Research and Technology Hellas)

Why and how to use the European AI-on-demand platform

According to its own website, the European AI-on-demand platform is a one-stop shop for anyone looking for AI knowledge, technology, tools, services and experts. Its establishment is one of the main results of the AI4EU project, which was funded by the European Union as part of the Horizon 2020 initiative. The ultimate goal of this platform is to contribute to European sovereignty with respect to data and technology in the field of AI.
In this article, we provide an overview of the many different facets of the AI-on-demand platform, which reflects the diverse and colourful European AI landscape.

Since the end of AI4EU in December 2021, the AI-on-demand platform has been driven by the European AI community and the many follow-up projects of AI4EU within the European research initiatives ICT-48 and ICT-49. For example, the AI4EU Technical Governance Board (TGB) is currently managed by AI4Media, which belongs to ICT-48. More on the background of AI4EU and the specific contributions of AI4Media to the AI-on-demand platform can be found in an article in the previous AI4Media newsletter.

The main entry point to all facets of the AI-on-demand platform is the website www.ai4europe.eu; the platform itself, however, comprises not only this website but also the underlying virtual network and the activities of the involved parties. There are currently (May 2022) eleven contributing projects listed on the website, as well as more than one hundred organisations ranging from companies to research institutes and universities. The organisations and projects are linked to each other so that one can easily see who participates in which projects.

Going further, the AI-on-demand platform is also the home of several working groups, such as the Working Group for Ontology, to mention just one. The Observatory on Society and Artificial Intelligence (OSAI) is another example of cross-project collaboration on the platform. It is planned that the ethics section will also include policy recommendations for specific areas such as the media sector, based on the results of the corresponding tasks and work packages of AI4Media.

On the website, one may also find dedicated sections with news and events regarding the platform and the contributing projects. In this context, it is worth mentioning the AI4EU Web Cafés, a series of live webinars on AI. Since the end of AI4EU, this exceptionally successful format has been continued as AI Cafés under the umbrella of AI4Media. Recordings of past cafés are also available on YouTube and GoToStage.

One of the core parts of the AI-on-demand platform is the AI Catalogue, which currently (May 2022) lists about 150 AI assets of various types: services, datasets, Docker containers, executables, Jupyter notebooks, libraries, machine learning models and tutorials. These assets are linked to the contributing projects and organisations. While the AI Catalogue is simply a list of items that do not necessarily implement common interfaces, some of these assets have also been technically integrated into AI4EU Experiments, which can be seen as the technical part of the AI-on-demand platform. AI4EU Experiments is an interesting topic in its own right and will be discussed in one of the next AI4Media newsletters.

Not all facets of the AI-on-demand platform have been touched on in this article, and new ones might emerge as the platform develops, so it is worthwhile to check out the website from time to time.

Contributions to the AI-on-demand platform are welcome and can be submitted by anyone. For publishing content such as AI assets, news, or events in the already existing sections, it is sufficient to have an EU login. With this, you may log in to the AI4EU website and submit your content for review. Once it has passed the review process, it will be published in the respective section. New sections and features can be added to the platform upon request; such requests are discussed in the TGB.

Starting in July 2022, the AI-on-demand platform will find its new home in the Coordination and Support Action (CSA) AI4Europe. The sustainability of the AI-on-demand platform is thus ensured for the years to come, and it will be able to continue its substantial contributions to European sovereignty with respect to data and technology in the field of AI.

Author: Andreas Steenpass (Fraunhofer IAIS)

One step ahead in multimedia analysis and summarization

AI4Media explores innovative Deep Neural Networks (DNNs) for image/video/audio analysis and summarisation through cutting-edge machine learning. The work performed so far has resulted in novel ways to automatically shorten long videos through unsupervised key-frame extraction, as well as in novel AI tools for the management and retrieval of media datasets.
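For intuition, a simple unsupervised key-frame baseline (not the project’s method) clusters frame features and keeps the frame nearest each cluster centre:

```python
import numpy as np
from sklearn.cluster import KMeans

def keyframes(frames, k=5):
    """frames: (N, H, W, 3) uint8 array; returns sorted indices of k key-frames."""
    feats = frames.reshape(len(frames), -1).astype(np.float32)  # naive embeddings
    km = KMeans(n_clusters=k, n_init=10).fit(feats)
    idx = [int(np.argmin(np.linalg.norm(feats - c, axis=1)))    # nearest frame
           for c in km.cluster_centers_]
    return sorted(set(idx))

video = np.random.randint(0, 255, (120, 32, 32, 3), dtype=np.uint8)  # dummy clip
print(keyframes(video))
```

A DNN-based summariser would replace the raw-pixel embeddings with learned features and optimise the selection end to end.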

However, typical DNNs require very large amounts of labeled training data to achieve good performance. In a systematic effort to bypass this, AI4Media has also researched novel approaches to training or adapting DNNs for scenarios marked by a lack of large-scale, domain-specific datasets or annotations. The results so far include several innovative methods for few-shot, semi-supervised or unsupervised learning with media data.

In addition, AI4Media has researched advanced audio analysis for automatic music annotation and audio partial matching/reuse detection, mainly relying on DNNs. Overall, these algorithms can be readily exploited by industry-oriented tools for intelligent and automated media archive management, analysis, search and retrieval, as well as synthetic audio detection/verification.
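As a toy illustration of partial matching/reuse detection (the actual AI4Media tools rely on DNNs and robust features, not raw waveforms): slide a query snippet over a reference track and report where the normalised cross-correlation peaks.

```python
import numpy as np

def find_reuse(reference, query):
    """Return (best_offset_in_samples, correlation) of query within reference."""
    q = (query - query.mean()) / (query.std() + 1e-9)
    best_corr, best_off = -1.0, 0
    for off in range(len(reference) - len(query) + 1):
        r = reference[off:off + len(query)]
        r = (r - r.mean()) / (r.std() + 1e-9)
        corr = float(np.dot(q, r)) / len(q)          # normalised cross-correlation
        if corr > best_corr:
            best_corr, best_off = corr, off
    return best_off, best_corr

ref = np.random.randn(2000)                          # "reference" track
snippet = ref[700:900] + 0.1 * np.random.randn(200)  # reused, slightly degraded
print(find_reuse(ref, snippet))                      # offset close to 700
```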

In this context, AI4Media has so far produced several modern AI tools for:

Video key-frame extraction. Check out the related papers:

Video Summarization Using Deep Neural Networks: A Survey (Link)

Adversarial Unsupervised video summarization augmented with dictionary loss (Link)

Information retrieval on cultural media datasets, relying on a synthesis of computational deep learning with symbolic semantic reasoning. Check out the related paper:

Learning and Reasoning for Cultural Metadata Quality (Link)

Few-shot object detection. Check out the related code:

Few-shot object detection (Code)

Unsupervised domain adaptation for traffic density estimation/counting or for visual object detection. Check out the related paper:

Domain Adaptation for Traffic Density Estimation (Link)

Advanced video browsing and search. Check out the related paper:

The VISIONE Video Search System: Exploiting Off-the-Shelf Text Search Engines for Large-Scale Video Retrieval (Link)

Semi-supervised learning for fine-grained visual categorization. Check out the related paper:

Fine-Grained Adversarial Semi-supervised Learning (Link)

Deep dictionary-based representation learning. Check out the related paper and code:

When Dictionary Learning Meets Deep Learning: Deep Dictionary Learning and Coding Network for Image Recognition With Limited Data (Link)

Deep Micro-Dictionary Learning and Coding Network (Code)

Even though these activities are only the outcomes of the first project period, plans have already been laid to expand upon them in exciting new directions.

Author: Ioannis Mademlis (Aristotle University of Thessaloniki)

Legal and ethical framework of trusted AI

AI4Media conducted an initial analysis of the legal and ethical framework for trusted AI, addressing the question of how the GDPR provisions should be interpreted when applied in an AI system context.

This work comprises:

  • an analysis of the EU data protection framework relevant for AI systems;
  • a reflection on the upcoming EU legislation;
  • an initial suggestion towards the reconciliation of AI and GDPR legal frameworks;
  • a preliminary list of recommendations for trusted and GDPR-compliant AI, and
  • ways to mitigate and prevent risks and gaps.

Firstly, the research showed that although the GDPR does not refer to “artificial intelligence”, many provisions of the legal text prove relevant for AI systems. It also highlighted a lack of sufficient clarity, as well as uncertainties and diverging opinions between scholars and interpretative guidelines: the academic literature showed sometimes converging and sometimes conflicting opinions within the research community on the scope of some GDPR provisions applied to AI systems. This research also introduced the use of AI systems in the media environment, including recommender and targeted advertising systems.

The report then delivered a comprehensive description of the overarching principles of the GDPR, including lawfulness, fairness, and transparency. A detailed analysis of the Art. 5 GDPR principles of purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality (security), and accountability was also provided.

The different data subject rights, as applied in the context of AI, were also analysed. The rights considered were: the right to be informed, the right not to be subject to a decision based solely on automated processing, the so-called right to explanation, the right of access, the right to rectification, the right to erasure, the right to restrict processing, and the right to object.

The report also presented the growing challenges involved in complying with data subjects’ requests for rights enforcement in big datasets, including complexities related to the different stages of AI system processing, transparency and the right to information as keys for exercising the other rights, uncertainties regarding the application of data subjects’ rights, unfriendly AI system interfaces for rights enforcement, and a lack of enforcement leading to trade-offs.

The analysis also briefly touched upon upcoming European legislation relevant to the provisions of the GDPR and AI systems for processing personal data, including the AI Act proposal, the Data Governance Act proposal, and the proposed Data Act. The legislator seems well aware of the current challenges of GDPR and AI, as these upcoming instruments try to complement the GDPR and create additional safeguards, data quality requirements and favourable conditions to enhance data sharing. However, as these instruments are still being negotiated, it remains to be seen how this will materialise.

Finally, the report presents a set of initial recommendations built upon the analysis conducted throughout the first 18 months of the project. These recommendations address ways to ensure the development of trusted and GDPR-compliant AI, offering a conclusion on the gaps and challenges identified throughout the report, while also providing ways forward to mitigate and prevent the identified issues for trusted AI.

What’s next? Further research will dive deeper into legal data protection aspects of AI applications in media environments and will investigate how people can be made aware of what is being done with their data. This deliverable is the first step toward the final analysis, which is due in August 2023.

Access the full report HERE

Author: Noémie Krack (KU Leuven)

The AI4Media Evaluation-as-a-Service Platform

Benchmarking is a vital tool for the development of new technologies, as it enables a fair process for comparing the performance of different AI algorithms on common grounds, e.g., data, training conditions, and metrics. The dedicated AI4Media open Benchmarking Platform, currently in its prototype phase, provides such capabilities.

It was developed on the CodaLab framework. A testing benchmark is also provided as an example, namely the novel “late fusion” benchmark (ImageCLEFfusion 2022 task). The platform allows users to create benchmarking tasks, create cloud-based repositories, and manage participants and submitted data, and it offers API integration.
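For a flavour of what a late-fusion task involves, here is a sketch in the spirit of ImageCLEFfusion (not the benchmark’s actual protocol): per-item scores from several inducer systems are combined, here by a weighted average, and items are re-ranked by the fused score.

```python
import numpy as np

def late_fusion(scores, weights=None):
    """scores: (n_systems, n_items) per-system prediction scores."""
    scores = np.asarray(scores, dtype=float)
    w = np.ones(len(scores)) if weights is None else np.asarray(weights, dtype=float)
    return (w[:, None] * scores).sum(axis=0) / w.sum()

system_scores = [[0.9, 0.2, 0.4],   # system A
                 [0.7, 0.1, 0.6],   # system B
                 [0.8, 0.3, 0.5]]   # system C
fused = late_fusion(system_scores, weights=[2, 1, 1])
print(fused, np.argsort(fused)[::-1])  # fused scores and resulting ranking
```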

The platform brings several advantages:

  • a European-based Evaluation-as-a-Service platform;
  • better control over data privacy, as access to data can be managed and the platform can even be deployed on local installs, thus separating it from the outside world;
  • development of reproducible and computationally efficient AI, through the high-level functions and options offered to the users;
  • addition of computational efficiency metrics that organizers can use to understand the computational complexity of the participants’ methods.

Access the prototype source code HERE

Author: Bogdan Ionescu (Politehnica University of Bucharest)

Kick-off for the first 10 projects funded by the AI4Media Open Calls

The 10 projects funded by AI4Media’s 1st Open Call are underway, having held their official kick-off meeting on 2 March 2022. In the context of the funding programme, AI4Media will financially support each project with €50,000 and will provide tailored coaching, market-driven services, and business support, in addition to large-scale visibility. Some of the topics addressed by the projects include AI music and audio, media authentication, fact-checking, disinformation, and much more.

The objective of AI4Media Open Call #1 was to engage companies and researchers in developing new research and applications for AI, enriching the project’s pool of technological tools. Submissions were required to address one of seven specific challenges or open challenges from a Research or Application track.

The 10 projects were selected from a total of 60 submissions from 22 countries. The competitive open call ran from 1 September to 1 December 2021. Eligible submissions were subject to an external evaluation by independent experts, and a selected group of proposals went on to the interview stage. Each project has been awarded up to €50,000 to implement its work plan.

A quick glance at the funded projects:

AIEDJ – AI Empathic DJ App (musicube GmbH, Germany): Aims to expand on existing AI software for audio and music and adapt it to each listener’s perspective on music so that the AI learns and adapts to different musical tastes.

CIMA – Next-Gen Collaborative Intelligence for Media Authentication (AdVerif.ai, Israel): Aims to develop a next-generation intelligence platform to make collaborative collection of evidence for media authentication easier and faster. The platform will adapt cutting-edge AI methods from cyber-security to the media domain, empowering fact-checkers and journalists to be more effective.

CUHE – An explainable recommender system for holistic exploration and CUration of media HEritage collections (IN2 Digital Innovations GmbH, Germany): Aims to develop and demonstrate a web-based application, based on AI recommendations, that will allow cultural heritage professionals as well as (humanities) researchers to explore existing media and cultural heritage digital collections in a more holistic way, and to curate new galleries or create digital stories and exhibitions that showcase and share the new insights gained.

InPreVI – Inauthentic web traffic Prediction in Video marketing campaigns for investment optimization (JOT Internet Media, Spain): Aims to develop an innovative AI-based system that uses existing JOT-owned video web traffic data to (1) identify the main behavioural patterns of inauthentic users in order to predict their actions and limit their impact on video marketing campaigns, and (2) model the quality score associated with a campaign.

VRES – Varia Research (Varia UG, Germany): Aims to bring AI power to the frontlines of the media industry, to the journalists. While journalistic research processes today are highly fragmented and based on workarounds, Varia Research will be the first holistic application that gives all central research activities a common home. 

edgeAI4UAV – Computer Vision and AI Algorithms Edge Computation on UAVs (International Hellenic University, Greece): Aims to develop an edge computation node for UAVs equipped with lightweight active computer vision and AI (deep learning) algorithms capable of detecting and tracking moving objects, while at the same time ensuring robust UAV localization and reactive navigation behaviour.

NeurAdapt – Development of a Bio-inspired, resource efficient design approach for designing Deep Learning models (Irida Labs, Greece): Aims to deliver a framework where established techniques such as channel gating, channel attention and calibrated dropout are synthesized to formulate a building block of a novel methodology for designing CNN models.

RobaCOFI – Robust and adaptable comment filtering (Institut Jozef Stefan, Slovenia): Aims to develop new methods to address the problem of filtering and moderating comments and make the initial implementation process easy and fast, and to develop methods for semi-automatic annotation of data, including new variants of active learning in which the AI tools can quickly select the data they need to have labelled.

SMAITE – Preventing the Spread of Misinformation with AI-generated Text Explanations (University of Manchester, United Kingdom): Aims to develop a fact-checking system underpinned by deep learning-based, generative language models that will generate explanations that meet the identified requirements.

TRACES – AuTomatic Recognition of humAn-written and deepfake-generated text disinformation in soCial mEdia for a low-reSourced language (Sofia University “St. Kliment Ohridski”, GATE Institute, Bulgaria): Aims to provide solutions to the problem of fake content and disinformation spread worldwide and across Europe, and the detection of deep fakes, by creating methods and resources for detecting both human and deepfake disinformation in social media for low-resourced languages.

More information about the projects HERE

Authors: Samuel Almeida & Catarina Reis (F6S)

New white paper maps the societal potentials and challenges of AI for the media

A newly published white paper provides an in-depth mapping of the main potentials and challenges of AI applications in the media cycle, offering a unique overview of the state-of-the-art discussion of the societal impacts of AI. Based on this mapping, provisional guidelines and considerations are distilled to guide the future work of industry professionals, policy makers and researchers.

The white paper has been produced by researchers from the University of Amsterdam, the Netherlands Institute for Sound & Vision and KU Leuven as part of the AI4Media project. It is based on a thorough literature review of academic journals published by scholars within the fields of humanities, social science, media and legal studies, as well as reports developed either with a specific focus on AI in the media sector or with a broader outlook on AI in society.

The white paper is divided into two major parts. The first part identifies the main potentials and challenges across the entirety of the media cycle, including i) ideation, ii) content gathering, iii) media content production, iv) media content curation and distribution, v) deliberation over the content, and vi) archival practices. The second part explores six societal concerns that affect or impact the media industry. These include:

  • Biases and discrimination: AI is, on the one hand, discussed as a potential solution for mitigating existing media biases (e.g., the overrepresentation of male sources). On the other hand, there is also concern about how AI systems might sustain or further enhance existing biases (e.g., in content moderation, where minorities are less protected from hate speech) and how that might have severe long-term effects on the role of media in society and the democratic practices it cultivates.
  • Media (in)dependence and commercialisation: The “platformisation” of society also applies to the media sector, which is dependent on e.g., social media in their distribution of content and entangled in commercial data infrastructures. One major concern regarding this commercialisation and dependence on different platforms is the effects of such dependencies on media independence.
  • Inequalities in access to AI: While the use of AI is expanding rapidly, it is not doing so equally across the world. The primary beneficiaries of AI solutions remain the global north and particularly English-speaking countries. Inequality in access is, therefore, also a major concern. In the media sector, this gap is widening further because of existing competitive differences between smaller and larger media organisations, which could reduce media diversity.
  • Labour displacements, monitoring, and professional control: AI is often discussed in terms of the risk of labour displacement. In the media sector, the effects of AI on existing jobs remain limited, although some examples of displacement are emerging. However, AI also induces new power asymmetries between employees and employers as metrics and monitoring practices are becoming more common. Last, AI is transforming existing media practices (e.g., genres and formats) and challenging the professional control and oversight of both production and distribution practices.
  • Privacy, transparency, accountability, and liability: The privacy discussion regarding AI for media relates mostly to data privacy, where commercial and democratic ideals conflict. Media organisations must consider their responsibility regarding data privacy models, and new best practices for responsible data handling are needed. Transparency is mainly discussed in relation to the disclosure practices that media organisations currently employ and the streamlining needed to ensure better transparency in the media landscape. Accountability is mainly discussed in relation to how and where to place responsibility as new actors enter the media landscape with the use of AI (e.g., service providers of AI).
  • Manipulation and mis- and disinformation as an institutional threat: The threat of manipulation is highly present in the discussion of AI and media, as well as in society at large through concepts such as ‘fake news’. In the media sector specifically, much of the discussion centres on how other actors can sway public opinion through the manipulation of content (e.g., deep fakes) or by affecting modes of distribution (e.g., bots). As media continue to serve an important role in society as trusted sources of information, the potential negative effects on the trustworthiness of media are significant. Since the media are a core actor in the fight against disinformation, the development of tools to support the work of media professionals is important.

In the white paper, these discussions are further fleshed out and core points of consideration for the media industry, policy makers and AI researchers who engage with the media sector are suggested to help guide future work and research on AI.

Access the full white paper HERE.

A second version of the white paper will be developed and published in December 2023. In this version, some of these core points of consideration will be further explored and qualified through workshops with relevant media organisations that can help provide even more concrete suggestions of best practices.

Author: Anna Schjøtt Hansen (University of Amsterdam)

Discover the AI4Media Roadmap on AI technologies and applications for the Media Industry!

The AI4Media project developed a Roadmap on AI technologies and applications for the Media that aims to provide a detailed overview of the complex landscape of AI for the media industry.

This Roadmap:

  • analyses the current status of AI technologies and applications for the media industry;
  • highlights existing and future opportunities for AI to transform media workflows, assist media professionals, and enhance the user experience in different industry sectors;
  • offers useful examples of how AI technologies are expected to benefit the industry in the future; and
  • discusses facilitators, challenges, and risks for the wide adoption of AI by the media.

The roadmap comprises 35 white papers discussing different AI technologies and multimedia applications, the use of AI in different media sectors, AI risks for society and the economy, legal and ethical aspects and the latest EU regulations, AI datasets, benchmarks and open repositories, opportunities in the time of the pandemic, environmental aspects, and many more topics.

The AI4Media Roadmap offers an in-depth analysis of the AI for Media landscape based on a multi-party, multi-dimensional and multi-disciplinary approach, involving the AI4Media partners, external media and AI experts, the AI research community, and the community of media professionals at large. Three main tools have been used to describe this landscape:

  • a multi-disciplinary state-of-the-art analysis involving AI experts, experts on social sciences, ethics and legal issues, as well as media industry practitioners; 
  • a public survey targeted at AI researchers/developers and media professionals; and
  • a series of short white papers on the future of AI in the media industry that focus on different AI technologies and applications as well as on different media sectors, exploring how AI can positively disrupt the industry, offering new exciting opportunities and mitigating important risks.

Based on these tools, we provide a detailed analysis of the current state of play and future research trends with regard to media AI (short for “use of AI in media”), comprising the following parts.

State-of-the-art analysis of AI technologies and applications for the media. Based on an extensive analysis of roadmaps, surveys, review papers and opinion articles focusing on the trends, benefits, and challenges of the use of AI, we provide a clear picture of the most transformative applications of AI in the media and entertainment industry. Our analysis identifies AI applications that are already having or can have a significant impact in most media industry sectors by addressing common needs and shared aspirations about the future as well as AI technologies that hold the greatest potential to realise the media’s vision for AI. 

Discussion of social, economic and ethical implications of AI. Complementing the previous state-of-the-art analysis, which highlights AI’s potential for the media industry from a technology and practical application point of view, this analysis dives into the social and ethical implications of AI, offering the point of view of social scientists, ethics experts and legal scholars, based on an extensive literature review of both industry reports and scholar articles. The most prevalent societal concerns and risks are identified, including bias and discrimination; media (in)dependence; unequal access to AI; privacy, transparency, accountability and liability; etc. In addition, we identify practices to counteract the potential negative societal impacts of media AI.

EU policy initiatives and their impact on future AI research for the media. We provide an overview of EU policy initiatives on AI, focusing on initiatives having a clear focus on the media industry. We discuss both policy (non-binding provisions) and regulatory initiatives (leading to the adoption of binding legal provisions), including the Digital Services Act, the AI Act, the Code of Practice on disinformation, the Proposal on transparency and the targeting of political advertising and more.

Analysis of survey results. Two online surveys were launched: i) a public survey aiming to collect the opinions of the AI research community and media industry professionals with regard to the benefits, risks, technological trends, challenges and ethics of AI use in the media industry (150 respondents from 26 countries); and ii) a small-scale internal survey addressed to the consortium, aiming to collect their opinions on the benefits and risks of media AI for society and democracy.

Main AI technology & research trends for the media sector. Based on the results of the state-of-the-art analysis, we highlight the potential of specific AI technologies to benefit the media industry, including reinforcement learning, evolutionary learning, learning with scarce data, transformers, causal AI, AI at the edge, bioinspired learning, and quantum computing for AI learning. For each technology, a white paper offers an overview of the current status of the technology, drivers and challenges for its development and adoption, and future outlook. The white papers also include vignettes, i.e. short stories with media practitioners or users of media services as the main characters, aiming to vividly showcase how AI innovations could help the media industry in practice.

Main AI applications for the media sector. Based on the results of the state-of-the-art analysis, we highlight the potential of specific AI applications to benefit the media industry, including multimodal knowledge representation and retrieval, media summarisation, automatic content creation, affective analysis, NLP-enabled applications, and content moderation. Similarly to the above, a short white paper is presented for each application, offering a clear overview of the current status of the technology, drivers and challenges for its development and adoption, and future outlook.

Future of AI in different media sectors. We present a collection of white papers, focusing on the deployment of AI in different media industry sectors, including news, social media, film/TV, games, music and publishing. We also explore the use of AI to address critical societal phenomena such as disinformation and to enhance the online political debate. Finally, we explore how AI can help the study of media itself in the form of AI-enabled social science tools. These papers offer an in-depth look at the current status of each sector with regard to AI adoption, most impactful AI applications, main challenges encountered, and future outlook. 

Analysis of future trends for trustworthy AI. We present four white papers focusing on different aspects of trustworthy AI, namely AI robustness, AI explainability, AI fairness, and AI privacy, with a focus on media sector applications. The analysis explains existing trustworthy AI limitations and potential negative impacts.

AI datasets and benchmarks. We analyse existing AI datasets and benchmark competitions, discussing current status, research challenges and future outlook, while also providing insights on the ethical and legal aspects of the availability of quality data for AI research.

AI democratisation. We discuss issues related to AI democratisation, focusing on open repositories for AI algorithms and data and research in the direction of integrated intelligence, i.e. AI modules that could be easily integrated into other applications to provide AI-enabled functionalities. 

External forces that could shape the future. We discuss the forces that could shape the future of the use of AI in the media sector, focusing on legislation/regulation, the pandemic and its impact, and the climate crisis.

The Roadmap has been developed as part of the AI4Media public deliverable D2.3 “AI technologies and applications in media: State of Play, Foresight, and Research Directions”.

Access the full version of the Roadmap

Access the Web version of the Roadmap

Author: Filareti Tsalakanidou (Information Technologies Institute – Centre for Research and Technology Hellas)