A successful launch of the International Doctoral Academy on Artificial Intelligence – AIDA

The AI4Media project – funded by the European Union – has announced the successful launch of the International Artificial Intelligence Doctoral Academy (AIDA). Kicked off in November 2021, it already counts 73 top Artificial Intelligence (AI) members from excellent European universities as well as research institutes and companies.

Founded through the joint collaboration of five European-funded projects (AI4Media, VISION, ELISE, HumanE-AI Net, and TAILOR), AIDA fosters Ph.D. education excellence in AI through the involvement of its leading European AI partners. AIDA's scope is pan-European and international; it is the first academy of its kind in Europe and beyond, and it aims to become a world reference for AI Ph.D. studies. This effort has strong momentum, as shown by the 20 new members that joined AIDA in the last 4 months.

AIDA is therefore attaining the critical mass needed to have a large impact on AI academic education and industry workforce upskilling, and to address very important societal challenges. These challenges range from the fight against disinformation to the provision of human-centered and trustworthy AI that serves not only European citizens, but humanity in general.

It also aims to ensure European strategic autonomy in AI technology, which has great potential socio-economic impact, and to reinforce Europe's assets in AI by drawing on its world-class research community, which stays at the forefront of AI developments. AIDA has the potential to form a common AI resource center and become a shared facility offering access to knowledge and expertise, attracting talented researchers, and creating an easy entry point to AI excellence in Europe.

Additionally, AIDA is boosting a much-needed link between AI and the humanities, towards creating an anthropocentric (human-centered) European brand of AI that serves citizens worldwide. AIDA is also building the educational momentum needed to cater to ever-growing societal and industrial needs, towards a strong, rich, human-centric, and democratic world.

AIDA is coordinated by the Aristotle University of Thessaloniki, Greece, proudly named after Aristotle, the ancient Greek philosopher who founded logic and ethics, both of which are at the core of AI.

See more details at: https://www.i-aida.org/

One year of AI4Media – The main achievements

The first year of the AI4Media project was successfully completed in August 2021, delivering important outcomes as we work towards our main objectives: delivering the next generation of core AI advances for the media sector, enhancing AI training, and reimagining AI as a crucial beneficial enabling technology in the service of Society, Democracy and Media.

Our main achievements during this year can be summarised as follows:

  • Novel research tools & methodologies have been developed in four core AI areas: a) new learning paradigms and distributed AI, b) trustworthy (explainable, robust, privacy-preserving, fair) AI, c) multimedia content creation and analysis, and d) AI in the service of citizens and society (disinformation detection, fair recommendation systems, political debate analysis, real-life effects of content sharing, etc.). Our research has already resulted in more than 70 publications in prestigious venues, open-access software on GitHub, and 4 new open datasets;
  • The user requirements for the seven AI4Media use cases have been identified through an intense co-creation process that involved strong collaboration between AI researchers & media industry partners. This process resulted in an impressive mapping of AI4Media use cases and the media industry’s AI requirements across the whole media value chain.
  • A detailed overview and analysis of AI policy and regulatory initiatives on the EU level have been delivered as a first step for the establishment of the Media AI Observatory. The analysis covers a wide variety of policy initiatives, including political, ethics & trust, IPR, safety & liability, and AI-specific, while it also examines regulatory initiatives like the AI package, DSA, DMA, DGA, and Data Act. In every case, the impact of these initiatives on the media sector has been explored;
  • The International AI Doctoral Academy (AIDA) has been successfully established with the support of its 53 founding members, including leading European universities, research centres and industry. Offering three distinct curricula (AI core, AI & Media, AI & Society) and a variety of AI courses, AIDA is already becoming a reference point for AI PhD-level education in Europe;
  • Close collaboration with the AI4EU platform has already been established, aiming to enrich the platform with new AI4Media resources;
  • The first AI4Media open call for equity-free funding of new research and applications for AI & media was launched on September 1st, 2021. After an evaluation process, 10 projects will be selected and funded with €50K each;
  • We have successfully organised a series of theme-based public workshops that cover different aspects of AI, aiming to disseminate the project’s research outcomes but also to allow fruitful discussion among peers and experts on hot AI topics. In addition, we joined forces with ELISE to co-organise an AI Mellontology e-symposium in the context of AIDA to debate interesting AI society/industry/economy topics. All recordings are available on AI4Media’s and AIDA’s YouTube channels.
  • During the next period we will focus on the development of the first version of the use case demonstrators and the integration of new AI features in them; the development of new AI technologies for media and society; the establishment of the European AI Media Observatory; the expansion of AIDA with new members, new AI courses and new educational activities; the mobility of young AI researchers in the context of our Junior Fellows Exchange Program; the implementation of the first 10 open call research & application projects and the launch of a 2nd Open Call!

Follow us on the rest of this 4-year journey to explore the opportunities offered by AI for the Media Sector and co-shape the European AI agenda!

International AI Doctoral Academy (AIDA)

Author: Prof. Pitas Ioannis (The Aristotle University of Thessaloniki – AUTH)

The four ICT-48 networks (AI4Media, ELISE, HumanE-AI NET, TAILOR) and the VISION project joined forces and, under the joint initiative of AI4Media and VISION, founded a new joint instrument to support a world-level AI education and research programme.

The International AI Doctoral Academy (AIDA) has been created to offer access to knowledge and expertise and to attract Ph.D. talent to Europe. AIDA was very successfully launched on 3 November 2021. It now has 67 top AI members (a very good mix of 50 excellent European universities and 17 research institutes and companies). AIDA is the first academy of its kind in Europe and internationally.

AIDA's membership is excellent, its scope is pan-European and international, and its aim is high: to become a world reference for AI Ph.D. studies.

It is very important that 67 leading European AI partners have joined efforts with the five ICT-48 AI projects to foster Ph.D. education excellence in AI. This effort has strong momentum, as shown by the 14 new members that joined AIDA in the last 3 months. AIDA can therefore indeed attain the critical mass needed to have a large impact on AI academic education and industry workforce upskilling, and to address very important societal challenges, which range from the fight against disinformation to the provision of human-centered and trustworthy AI that serves not only European citizens, but humanity in general.

AIDA can also ensure European strategic autonomy in a critical technology like AI, with huge potential socio-economic impact, and reinforce Europe's assets in AI by supporting its world-class research community so that it stays at the forefront of AI developments. It can form a common AI resource center and become a shared facility offering access to knowledge and expertise and attracting talented researchers. It indeed aims high at becoming a world reference point, creating an easy entry point to AI excellence in Europe. AIDA is coordinated by the Aristotle University of Thessaloniki, Greece, proudly named after Aristotle, the ancient Greek philosopher who founded logic and ethics, both of which are at the core of AI.

Therefore, AIDA can boost a much-needed link between AI and the humanities, towards creating an anthropocentric (human-centered) European brand of AI that serves citizens worldwide. AIDA will also build much-needed educational momentum to cater to ever-growing societal and industrial needs, towards building a strong, rich, human-centric and democratic Europe.

AI4Media’s integration with the European AI-on-demand platform

Author: Andreas Steenpaß (FRAUNHOFER IAIS)

The European AI-on-demand platform is a one-stop shop for anyone looking for AI knowledge, technology, tools, services and experts. The aim of this platform, which was initiated by the AI4EU project, is to bring together the AI community while promoting European values, and to facilitate technology transfer from research to industry. As a follow-up project of AI4EU, AI4Media is collaborating closely with the AI4EU platform by integrating the project's outputs, such as modules, services and algorithms, into it, as well as by organizing web cafés for community building. Through these activities, AI4Media is one of the pillars ensuring the sustainability of the AI-on-demand platform over the years to come.

In January 2019, the AI4EU consortium was established to build the first European artificial intelligence on-demand platform and ecosystem with the support of the European Commission under the H2020 program. As more and more features are integrated, the AI4EU platform serves as a catalyst to aid AI-based innovation, resulting in new products, services and solutions to benefit European industry, commerce and society. By bringing people together, the platform counterbalances the fragmentation of the European AI landscape.

Since the end of 2021 also marks the official end of the AI4EU project, it is now the task and responsibility of the follow-up projects within the ICT-48 and ICT-49 funding initiatives to keep the AI-on-demand platform alive by continuously integrating new assets and features. The integration of AI4Media with the platform covers a wide spectrum of aspects, reflected by the different sub-activities:

First, AI4Media ensures the publication of the AI resources developed within the project to the AI-on-demand platform. There is a large variety of resource types, such as services, datasets, docker containers, libraries, and tutorials. All of them will be published online in the AI Catalogue. The high quality of the uploaded assets is guaranteed by the publication process, and each entry contains detailed information about the respective resource, including a textual description, relevant documents, the license, and the GDPR requirements.

Second, AI4Media supports the community-building activities of the AI-on-demand platform by offering a series of live web cafés on AI. The goal of these sessions is to gain insights into the international AI scene, to share knowledge and experiences, and to meet stakeholders from various areas of AI research and application. The live web cafés regularly reach a very wide audience, while recordings of past web cafés are available on GoToStage. So far, there have been six sessions with contributions from AI4Media members and this exceptionally successful format will surely be continued.

Third, a selection of the resources published in the AI Catalogue is also technically integrated into AI4EU Experiments, an open-source platform for the development, training, sharing and deployment of AI models which constitutes the technical part of the AI-on-demand platform. Of course, this only applies to resources of those types where a technical integration is reasonable, such as datasets and docker containers, but not tutorials. The selection is made based on the requirements of AI4Media's use cases and on the impact of the relevant research.

Going beyond the publication and technical integration of AI resources, AI4Media will also provide showcases for the interoperability of AI4EU Experiments with other media platforms, which is a major success factor for wider dissemination on both sides. For example, adapters are foreseen for making modules from other platforms available in AI4EU Experiments.

Finally, the project also conducts in-depth research on the shifting approach from platform liability to platform responsibility for third-party infringing and/or illegal content. For this task, the focus lies on specific guidelines and recommendations regarding the impact of legal regulations on the AI-on-demand platform. It is worth emphasizing that some activities of the AI4EU project will continue under the umbrella of AI4Media, such as the very successful web cafés and the further development of AI4EU Experiments. Some of the aspects outlined above have also been discussed in much greater detail at the AI4Media workshop on the European AI-on-demand platform which took place on 11 November 2021 (Available on YouTube). Once again, the workshop has illustrated that the integration of AI4Media with this platform is a continuous process for the benefit of both projects.

Detecting deepfakes in multimedia content

Authors: Roberto Caldelli (MICC – University of Florence); Fabrizio Falchi (National Research Council – CNR); Adrian Popescu (French Alternative Energies and Atomic Energy Commission – CEA)

When created by malevolent entities, deepfakes pollute the online space and have deleterious effects on users' real lives, especially when aimed at interfering with debates around polarizing situations. Deep learning has enabled the generation of credible deepfakes for different types of multimedia content, such as texts, videos and images. AI4Media proposes tools for efficient deepfake detection, regardless of the nature of the forged documents.

Promising results have already been obtained for texts and videos. Fake texts are difficult to distinguish from human-generated texts for short sequences. An efficient method was designed for detecting fake tweets generated by specific accounts, by learning adapted deep language models per account. Deepfake videos are hard to detect when models have not been trained on a specific type of forgery. An algorithm that leverages optical flow in videos was introduced, and it successfully generalizes detection capabilities to unlearnt forgeries.

Deep language models can be used to generate short texts, such as tweets, which are difficult to distinguish from real tweets. Fake tweets are written by bots that mimic specific users by exploiting language models fine-tuned on the user's past contributions. The more refined the language models are, the more credible the generated fake tweets will be. The proposed fake tweet detection method is designed to match these generation practices and thus successfully distinguish fake from real tweets. A wide array of detection models was tested, and the best results were obtained using RoBERTa, a recently proposed deep language model. It provides a detection accuracy of over 90%. The method can be used for effective flagging of fake texts on Twitter. Importantly, it is easy to deploy for the large number of users who are of interest in AI4Media (or beyond), since language models are created per account. The work also led to the creation of TweepFake, a public dataset dedicated to the detection of deepfake tweets. The availability of this dataset will facilitate future research in the area and ensure the proposal of comparable and replicable results (Read the paper).
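The per-account setup can be sketched with a drastically simplified stand-in for RoBERTa: a bag-of-words Naive Bayes classifier trained on one account's real tweets and on bot-generated imitations. The classifier choice and all tweets below are illustrative assumptions, not the project's actual model or data.

```python
import math
from collections import Counter

def tokenize(text):
    # Crude whitespace tokenizer; real systems use subword tokenizers.
    return text.lower().split()

class NaiveBayesTweetDetector:
    """Toy per-account 'fake vs. real' tweet classifier.

    Stands in for the RoBERTa-based detector described above: one model
    is trained per account, on that account's real tweets and on tweets
    produced by a bot imitating it.
    """

    def __init__(self, alpha=1.0):
        self.alpha = alpha  # Laplace smoothing constant
        self.word_counts = {"real": Counter(), "fake": Counter()}
        self.doc_counts = {"real": 0, "fake": 0}
        self.vocab = set()

    def fit(self, tweets, labels):
        for text, label in zip(tweets, labels):
            self.doc_counts[label] += 1
            for tok in tokenize(text):
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)
        return self

    def _log_likelihood(self, tokens, label):
        total = sum(self.word_counts[label].values())
        n_docs = sum(self.doc_counts.values())
        logp = math.log(self.doc_counts[label] / n_docs)  # class prior
        for tok in tokens:
            count = self.word_counts[label][tok]
            logp += math.log((count + self.alpha) /
                             (total + self.alpha * len(self.vocab)))
        return logp

    def predict(self, text):
        tokens = tokenize(text)
        return max(("real", "fake"),
                   key=lambda lbl: self._log_likelihood(tokens, lbl))

# Hypothetical training data for a single account.
real = ["excited to share our new paper on media forensics",
        "great discussion at the workshop today"]
fake = ["click here to win free followers now",
        "free followers click now win big"]
model = NaiveBayesTweetDetector().fit(real + fake, ["real"] * 2 + ["fake"] * 2)
print(model.predict("win free followers"))   # → fake
```

A real deployment would replace the toy classifier with a fine-tuned transformer, but the per-account training loop is the same shape.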

AI technologies can be used in various ways to generate realistic fake videos. While the detection of known forgeries is well handled, the same is not true for forgeries that are not known to the detection algorithms and thus cannot be learned. The proposed model exploits the optical flow fields of the videos in order to improve the robustness of the detection of unlearnt forgeries. It also has competitive performance for the detection of learned forgeries. The main novelty is the integration of the bi-dimensional optical flow fields with the pre-trained network, which usually receives three-channel (RGB) inputs. This allows the detection of temporal inconsistencies that complement the information obtained from the usual frame-based analysis of content. This work paves the way toward deepfake detection methods that are exploitable in practice, since it generalizes to forgeries that are unknown and thus not learned by detectors (Read the paper).
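The idea of pairing appearance with motion can be illustrated with a minimal numpy sketch. The flow estimator below is a crude brightness-constancy "normal flow", standing in for the dense optical flow algorithms used in practice, and the frames are synthetic toy data; the point is only to show the two-channel (u, v) field that gets stacked with the RGB channels as network input.

```python
import numpy as np

def normal_flow(frame_a, frame_b, eps=1e-6):
    # Only the flow component along the image gradient is recoverable
    # from two frames without regularisation ("normal flow"); real
    # pipelines use proper dense flow estimators instead.
    a = frame_a.astype(np.float64)
    Iy, Ix = np.gradient(a)                  # spatial derivatives
    It = frame_b.astype(np.float64) - a      # temporal derivative
    denom = Ix**2 + Iy**2 + eps
    u = -It * Ix / denom                     # horizontal flow component
    v = -It * Iy / denom                     # vertical flow component
    return np.stack([u, v], axis=-1)         # (H, W, 2) flow field

# Two toy 8x8 grayscale frames: a bright square shifted one pixel right.
f0 = np.zeros((8, 8)); f0[2:5, 2:5] = 1.0
f1 = np.zeros((8, 8)); f1[2:5, 3:6] = 1.0
flow = normal_flow(f0, f1)
print(flow.shape)                            # → (8, 8, 2)

# The detector's input: RGB channels concatenated with the flow field,
# letting the network see temporal inconsistencies alongside appearance.
rgb = np.repeat(f0[..., None], 3, axis=-1)        # fake 3-channel frame
net_input = np.concatenate([rgb, flow], axis=-1)  # 5 channels per pixel
print(net_input.shape)                       # → (8, 8, 5)
```

For the rightward-moving square, the horizontal flow channel is positive on average, which is exactly the kind of temporal signal a frame-based detector never sees.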

Work is currently ongoing to propose methods that combine different cues available in documents in order to seamlessly detect forgeries in multimedia documents. Early results obtained for deepfake videos which combine the visual and audio channels are particularly promising.

References:

Caldelli, R., Galteri, L., Amerini, I., & Del Bimbo, A. (2021). Optical Flow based CNN for detection of unlearnt deepfake manipulations. Pattern Recognition Letters, 146, 31-37.

Fagni, T., Falchi, F., Gambini, M., Martella, A., & Tesconi, M. (2021). TweepFake: About detecting deepfake tweets. Plos one, 16(5), e0251415.

Building trust in Artificial Intelligence – AI4Media’s contribution to an ethical AI

Author: Killian Levacher, Research Scientist (IBM Research Europe – Dublin)

One of the research domains AI4Media has focused on is the critical infrastructure necessary for the inclusion of AI tools within our society, investigated through the various dimensions of Trusted AI. During the first year of the project, our activities have already successfully provided various research contributions in areas such as AI Robustness, AI Fairness, Explainable AI and AI Privacy.

This work has also led to the publication of 6 papers in prestigious AI conferences, and the submission of 4 conference papers and 1 journal paper. These early accomplishments represent a solid foundation to expand our research throughout the remaining years of the project.

Artificial Intelligence (AI) is an area of strategic importance to the European Union with respect to its ability to support and shape future economic and social development. While the recent leaps in innovation in this space offer immense opportunities, the increasing importance and prevalence of AI systems across industries means that various aspects of this technology present security as well as societal risks, which may conflict with ethical and democratic principles shared across the European Union, such as transparency, privacy and inclusion.

Trustworthy AI hence aims at providing a framework for the development of Machine Learning (ML) technologies, which guarantees their suitability with respect to the democratic and ethical values shared in our society. This recently emerging field of AI can be typically divided into four broad dimensions, namely AI robustness, Explainable AI, AI fairness and AI privacy.

AI Robustness focuses on machine learning vulnerabilities that can be exploited by malicious attackers seeking to steal the capabilities of proprietary models, identify private information used to train these models, or purposely push a model into making incorrect predictions. These attacks can be achieved through the use of adversarial samples in various forms (images, texts, tabular data, etc.) and across a wide range of model types. In the first year of the AI4Media project, our activities already successfully provided various research contributions in this field.

  • The Aristotle University of Thessaloniki succeeded in creating a novel AI training method that uses hyperspherical class prototypes to increase the robustness of neural networks. (Read the paper)
  • IBM discovered that deep generative models can also be vulnerable to a new set of backdoor attacks. As part of this work, IBM developed new defence capabilities to protect generative AI models against such attacks. (Read the paper)
  • A new attack algorithm for promoting the robustness of re-identification systems was developed by the University of Trento; it addresses known vulnerabilities of such systems when they are used in domains unseen during their training phase. (Read the paper)
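To make the threat model above concrete, here is a minimal sketch of a gradient-based evasion attack (FGSM-style) against a toy logistic model. The weights, input and attack budget are all made up for illustration; this is the general class of attack being defended against, not any AI4Media system or the partners' specific algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)          # fixed "model" weights (hypothetical)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability that x belongs to class 1 under the toy logistic model.
    return sigmoid(w @ x + b)

x = rng.normal(size=16)
p_clean = predict(x)
y = 1.0 if p_clean >= 0.5 else 0.0   # the model's own label for x

# For logistic regression, the gradient of the cross-entropy loss with
# respect to the input is (p - y) * w. FGSM perturbs the input by a small
# step along the *sign* of that gradient to maximally increase the loss.
grad_x = (p_clean - y) * w
eps = 0.5                            # attack budget (illustrative)
x_adv = x + eps * np.sign(grad_x)

p_adv = predict(x_adv)
# The adversarial input is close to x but the prediction has moved away
# from the original label.
print(round(p_clean, 3), round(p_adv, 3))
```

Defences such as adversarial training or the prototype-based method cited above aim precisely at shrinking the effect of such bounded perturbations.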

Explainable AI deals with the trust that needs to be established between an AI model and its user. European legislation states that technical measures must be put in place in order to facilitate the interpretation of the outputs of AI systems by the public. In other words, users of AI models must be able to understand why predictions were made, regardless of the precision or validity of each prediction. While the recent explosion of deep learning models has led to amazing gains in performance, these models in particular provide very limited visibility even to their own designers as to how they reached a decision. It is, therefore, crucial to develop a set of technologies that can support users in understanding how specific predictions were made, in order for these technologies to be safely incorporated within the fabric of society.

During the first year of the project, AI4Media partners successfully made a few contributions in this dimension of Trusted AI.

  • The University of Applied Sciences and Arts Western Switzerland developed a new method that enables the public to understand and interpret the most salient internal features of deep learning models, in order to understand why a specific decision was made.
  • The Centre for Research and Technology Hellas developed a technique that identifies the most important items within a video frame, which AI models should use to explain and describe a specific scene to the public (Read the paper).
  • The Université Côte d'Azur developed a novel method for providing explanations for decision-making systems.
  • The Commissariat à l'Énergie Atomique et aux Énergies Alternatives proposed a new technique that enables vector arithmetic to be used in the underlying process by which generative models produce new synthetic material.
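A simple, model-agnostic way to get a first feel for what "salient regions" means in practice is occlusion sensitivity: mask one patch of the input at a time and record how much the model's score drops. This is a textbook technique shown for illustration only, not the specific methods developed by the partners above; the "model" below is a deliberately trivial stand-in.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=2):
    """Score drop per occluded patch = importance of that patch."""
    h, w = image.shape
    base = score_fn(image)
    saliency = np.zeros_like(image, dtype=np.float64)
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            occluded = image.copy()
            occluded[r:r + patch, c:c + patch] = 0.0  # black out one patch
            saliency[r:r + patch, c:c + patch] = base - score_fn(occluded)
    return saliency

# Hypothetical "model": scores an image by the brightness of its
# top-left 3x3 corner, so only that region should appear salient.
def toy_score(img):
    return float(img[:3, :3].sum())

img = np.ones((6, 6))
sal = occlusion_map(img, toy_score, patch=3)
print(sal[0, 0], sal[3, 3])   # → 9.0 0.0
```

With a real network, `score_fn` would be the predicted probability of the class being explained, and the resulting map highlights the frame regions driving the decision.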

Thanks to the collaboration of various partners, a public workshop (available on YouTube),  dedicated to developing a taxonomy of Explainable AI across various disciplines was also organised, bringing together 16 experts (7 invited speakers and 6 invited panelists) from a wide range of disciplines (technologists, philosophers, lawyers etc.) to discuss the various meanings, legal constraints and social impacts of Explainable AI and how these will impact the future technical development of the field.

Finally, the process of training and building AI models requires the management of large amounts of data, which in many cases contain sensitive information that should not be shared beyond a dedicated group of data processors and owners. This creates a conflict between the need for as much accurate data as possible to reach high accuracy, and the need to minimise the amount of data used in order to limit the impact on individuals' privacy.

Private information leakage can occur both while a model is being trained and after deployment. AI Privacy hence aims at threading the needle between these two forces by providing the means to produce reliable ML models while simultaneously protecting individuals' as well as corporations' sensitive information. In this domain, during the first year of the project, the IDIAP Research Institute developed a new tool to secure privacy within a specific type of neural network based on graphs. (Read the paper)

This resulted in a publication on "Locally Private Graph Neural Networks", which was shortlisted as one of the ten finalists for the CSAW 2021 Europe Applied Research Competition. This competition awards the best paper of the year written by doctoral students in the field of security and privacy. A differential privacy library for AI models (Access the GitRepo) was developed by IBM, and a novel method for data protection using adversarial attacks was developed by the Aristotle University of Thessaloniki (Read the paper).
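The core idea behind such differential privacy libraries can be shown in a few lines: answer a query with noise calibrated to how much any single individual can change the true answer, so that individual's presence or absence is statistically hidden. The sketch below implements the classic Laplace mechanism for a counting query; it is illustrative only and does not reproduce the API of IBM's library, and the data are made up.

```python
import numpy as np

def dp_count(values, predicate, epsilon, rng):
    """Epsilon-differentially-private count via the Laplace mechanism."""
    true_count = sum(1 for v in values if predicate(v))
    # Adding or removing one person changes a count by at most 1,
    # so the query's sensitivity is 1.
    sensitivity = 1.0
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

rng = np.random.default_rng(42)
ages = [23, 35, 41, 29, 52, 64, 37, 30]   # hypothetical records

# "How many people are over 40?" answered with epsilon = 1.0:
# the smaller epsilon is, the more noise (and privacy) you get.
noisy = dp_count(ages, lambda a: a > 40, epsilon=1.0, rng=rng)
print(round(noisy, 2))   # randomised answer near the true count of 3
```

Averaged over many releases the answers centre on the true count, but no single release reveals whether any particular individual is in the data.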

The use of AI in the media sector: policy and legislative developments at the EU level

Author: Lidia Dutkiewicz, Legal researcher (Center for IT & IP Law (CiTiP), KU Leuven)

During the past year, the European Commission (EC) proposed a comprehensive package of regulatory measures that address problems posed by the development and use of AI and digital platforms. These include the AI Package, the Digital Services Act and the Digital Markets Act, as well as the Data Governance Act and the forthcoming Data Act. In particular, the Artificial Intelligence Act (AI Act) presented by the EC in April 2021 represents a key milestone in defining the European approach to AI.

In a first-of-its-kind legislative proposal, the EC aims to set a global standard for addressing the risks generated by specific uses of AI through a set of proportionate and flexible legal rules. The key question is how these legislative proposals will affect the use of AI in the media sector.

During the first year of the AI4Media project, one of the key milestones was to provide a clear overview of existing and upcoming EU policy and regulatory frameworks in the field of AI. In the last few years, there has been a variety of publications, guidelines, and political declarations from various EU institutions on AI. These documents provide valuable insight into the future of AI regulation in the EU. However, the large number of developments in the area of "AI policy initiatives" makes it very difficult for AI providers and researchers to monitor the ongoing debates and understand the legal requirements applicable to them. The key challenge is to assess the possible implications of the proposed rules on AI applications in the media sector, i.e. for content moderation, advertising, and recommender systems.

Our analysis of the EU policy on AI envisages the impact of these EU initiatives for the AI4Media project in four distinctive areas. First, access to social media platforms' data allows researchers to carry out public-interest research into platforms' takedown decisions, recommender systems, mis- and disinformation campaigns, and so on. However, in recent years it has become increasingly difficult for researchers to access that data. That is why there is a clear need for a legally binding data access framework that provides independent researchers with access to a range of different types of platform data. Recent regulatory initiatives, such as the Digital Services Act (DSA), try to address this problem.

Article 31 of the DSA proposal provides a specific provision on data access. However, it narrows access to platforms' data to "vetted researchers", namely university academics, which excludes a variety of other actors: journalists, educators, web developers, fact-checkers, digital forensics experts, and open-source investigators. Moreover, "vetted researchers" will be able to access platforms' data only for purposes of research into "systemic risks". The final scope of this provision will, undoubtedly, shape the way in which (vetted) researchers, journalists, and social science activists will be able to access platforms' data. This is particularly relevant for AI4Media activities such as opinion mining from social media platforms or detection of disinformation trends.

Second, it is extremely important to clarify the position of academic research within the AI Act. It is currently unclear whether the AI Act’s primary objective i.e. “to set harmonised rules for the development, placement on the market and use of AI systems” and its legal basis exclude non-commercial academic research from the scope of the Regulation or not.

Third, the scope of the AI Act is unclear when it comes to its applicability to media applications. Importantly, certain practices, such as the use of subliminal techniques or the use of AI systems which exploit the vulnerabilities of a specific group of persons, are prohibited. However, the current wording of these provisions makes it unclear whether, and to what extent, online social media practices such as dark patterns fall within the scope of this prohibition. The AI Act also proposes certain transparency obligations applicable to AI systems intended to interact with natural persons, emotion recognition systems and deepfakes. However, the requirements lack precision on what should be communicated (the type of information), when (at which stage this should be revealed) and how. The important research questions which will be tackled in future AI4Media activities include:

  • Do 'AI systems intended to interact with natural persons' encompass recommender systems or robot journalism?
  • Do 'sentiment analysis', and the measurement and prediction of users' affective responses to multimedia content distributed on social media with the use of physiological signals, fall under "emotion recognition" systems?

Fourth, the use of AI to detect IP infringements and/or illegal content is one of the key legal and societal challenges in the field of AI and media. The key questions center around the role of ex-ante human review mechanisms before removing content and the potential violation of human rights, i.e. freedom of expression when legal content is being removed. The platform responsibility for third-party infringing and/or illegal content will be particularly relevant in AI4Media’s activity related to the “Integration with AI-On-Demand Platform”.

Considering the ever-changing legal landscape, the work performed in the analysis of the EU policy and regulatory frameworks in the field of AI is not a one-off exercise. Rather, the preliminary analysis done so far serves as a solid basis for the upcoming work in the later stage of the project, namely “Pilot Policy Recommendations for the use of AI in the Media Sector” and “Assessment of social/economic/political impact from future advances in media AI technology and applications”.


Stay tuned for more!

AI4Media Workshop on the European AI-on-demand platform

On November 11th, 2021, AI4Media organised a workshop on the European AI-on-demand platform.

The objective of this workshop was to allow a better understanding of the technical and non-technical facets of the AI-on-demand platform, highlighting the role of the platform as the central link between the European AI networks, and to encourage everyone interested in AI to join it.

The AI-on-demand platform has been initiated by the AI4EU project, and it aims to bring together the AI community while promoting European values, and facilitating technology transfer from research to business.

As a follow-up project of AI4EU, AI4Media is collaborating closely with the AI4EU platform by integrating the project’s outputs such as modules, services and algorithms into it as well as by organizing AI4EU Web Cafés for community building.

The workshop was divided into two parts. The first was dedicated to presenting the platform, including the organisations and projects involved and its different parts, such as the AI Catalogue and the web cafés. This session also illustrated the cooperation between European AI networks on the AI-on-demand platform, with special emphasis on digital innovation hubs.

The second part focused on presenting AI4EU Experiments, an open-source platform for the development, training, sharing and deployment of AI models which constitutes the technical part of the AI-on-demand platform. This included an introduction to general features such as the Marketplace and the Design Studio, as well as some example pipelines. A tutorial in this session also showed how AI4EU Experiments can be connected to other media platforms and how new modules can be integrated into it.

Workshop Recording

Agenda

DOWNLOAD

Presentations

AI4Media’s workshop on “European AI Vision & Policy – The Future of European AI regulation”

AI4Media’s workshop on “European AI Vision & Policy – The Future of European AI regulation” took place on September 14th, 2021 via Webex.

The objective of the workshop was to present the recent research advances achieved in the AI4Media project in this domain, allowing a better understanding of the legal, political and societal challenges faced by the media sector. Ultimately, the goal was to contribute to the ongoing debate on policy recommendations in the field of media and AI regulation.

The workshop addressed the following topics:

  • the latest European developments in the field of AI regulation, e.g. the European Commission’s Proposal for a Regulation on Artificial Intelligence (Artificial Intelligence Act);
  • the relevance of the AI Act for media and journalism;
  • the use of AI in the media sector: lessons learned and recommendations from Austrian and French case studies;
  • the role of data for academic research;
  • responsible and ethical use of AI in the media sector.

In addition, two invited speakers addressed these complementary topics:

  • Muriël Serrurier Schepper from Media Perspectives gave a guest talk on “Reaching consensus in the Dutch media industry on a Declaration of Intent for responsible use of Artificial Intelligence in the media: background and current state”, where she presented insights from the Dutch case study on the use of AI in the media sector.
  • Matthias Spielkamp from AlgorithmWatch gave a guest talk on “Gatecrashing the platforms’ party: Data access between self-defence and red herring”, where he addressed the key topic of accessibility of online platforms’ data for academic research.

Last but not least, the workshop ended with the keynote by Prof. Paul Keller and Dr. Alek Tarkowski from the OpenFuture Institute, who gave a presentation on “Regulating AI and the cultural commons” tackling the issue of the use of openly licensed photos and datasets for AI training.

Workshop recording

AGENDA

Download

AI4Media’s workshop on “Content-centered AI”

The objective of the workshop on “Content-centered AI”, organised by AI4Media on September 1st, 2021, was to present the contributions from AI4Media partners in fundamental machine learning research, and innovative Artificial Intelligence-based methods and tools for content production and usage.

The workshop addressed limitations of Deep Learning related to training with data scarcity, extending the potential applicability of AI to a wider set of media. It also presented innovative solutions for (semi-) automated multimedia content production, analysis of content provenance, visual data and audio retrieval, annotation, and summarization.

The workshop had two invited talks from worldwide renowned scientists:

  • Prof. Mohan Kankanhalli, National University of Singapore, gave a keynote talk on “Privacy-aware Analytics for Human Attributes from Images”, where he addressed the key topic of how to analyse human emotions, gender and age in images and videos under privacy-preserving conditions.
  • Prof. Alan Smeaton, Dublin City University, gave a keynote on “Multimedia analysis and multimedia retrieval: Is there a mismatch?”, where he explored the relationship between visual information and human memory and raised questions on the real effectiveness of the current search and analysis tools that we use.

Recording of the Workshop:

AGENDA

Download the Agenda

AI4Media workshop on “Human- and Society-centred AI”

On June 25th, 2021, AI4Media organised a technical workshop on “Human- and Society-centred AI” via Webex. The objective of the workshop was to present the recent research advances achieved in the AI4Media project in this domain, particularly addressing the following topics:

  • the detection of deep fakes in multimedia content
  • the dynamics of social media conversations
  • the automatic analysis of political discourse
  • the fusion of different signals for user behaviour characterization
  • the human-centered analysis of news consumption, and
  • the real-life effects of data sharing

The workshop aimed to allow a better understanding of the relation between AI and different aspects of news distribution and consumption, and contribute to the design of a healthier public debate, with a particular focus on societally impactful topics such as health and politics.

In addition to the interventions from AI4Media partners, the workshop also featured two guest speakers.

Recording of the workshop:

Agenda

AI4Media workshop on “New learning paradigms & distributed AI”

On Tuesday, May 4th, 2021, AI4Media organised the first of a series of workshops aimed at sharing the work and progress of the project across its different research activities.

The first workshop focused on “New learning paradigms & distributed AI”. Its main objective was to address the current limitations of learning approaches and to improve speed and performance, looking not only into new approaches that go beyond current achievements in deep learning (such as lifelong and continuous learning, moving beyond Variational Autoencoders (VAEs), manifold and transfer learning, neural architecture search, quantum-assisted learning, and the fusion of evolutionary algorithms with deep learning), but also into present solutions for decentralised and distributed computation.

The agenda included presentations from the AI4Media partners and two guest speakers:

  • Dr. Xavier Alameda-Pineda from INRIA, who addressed the research direction of audio-visual fusion for human behaviour understanding.
  • Prof. Rita Cucchiara from the University of Modena and Reggio Emilia, who tackled the hot topic of sustainable and privacy-preserving action understanding in videos.

Recording of the workshop:

Agenda of the Workshop:

Why is AI4Media important at technological, societal, political levels?

The AI4Media project is expected to transform the European AI landscape with regard to media & society and strengthen Europe’s position on the global AI stage.

Strengthening Europe’s leadership in AI research

AI4Media reinforces Europe’s research capacity in AI by performing cutting-edge research that caters to the needs of the media sector, tested through seven industrial use cases that range from combating disinformation in social media and supporting journalists for news story creation, to game design and artistic co-creation.

Delivering innovations for the News Media Industry

AI4Media aims to provide new AI-based solutions for the news media sector by developing novel tools for journalistic research, journalistic creation, news production and service packaging, bias and disinformation detection, and audience/content research and performance optimisation.

Safeguarding democracy and political discourse

Media manipulation and disinformation campaigns relying on manipulated content are increasingly used as instruments for influencing key democratic processes, undermining democratic institutions, and re-shaping public opinion. AI4Media addresses these problems by developing AI technology for detecting disinformation in social media, with a view to supporting journalistic fact-checking and verification workflows in news organisations.

Putting AI in the service of the society

The research community carries a particular responsibility to help protect European societies from the potential negative impact of AI. By bringing together a diverse community of scientific experts on AI, media organisations, and social scientists, AI4Media aims to promote a unique brand of Ethical AI, powered by European actors and promoted by European media industries.

AI4Media at the workshop “Towards a Global Taxonomy of Interpretable AI”

“Towards a Global Taxonomy of Interpretable AI” was a half-day workshop, supported by AI4Media, that was held on April 29th, where around 20 experts actively participated in the two sessions of the workshop: a round table discussion with short presentations and a panel session.

Invited talks were given by the experts to illustrate their perspective on Interpretable AI from the cognitive, social, ethical, and legal perspectives.

During the panel session, experts answered questions on interpretability toolboxes, ethical and philosophical concerns, and how these tools can be used in practice, for example in medicine.

The slides and streaming of the event are available on the event’s website HERE

AI4Media at the workshop “Building Interpretable AI for Digital Pathology”

A half-day workshop on “Building Interpretable AI for Digital Pathology” was held online for the Applied Machine Learning Days of Lausanne on April 27th.

The workshop, supported by AI4Media, offered an introduction to digital pathology presented by Prof. Inti Zloebec, and an overview of interpretability techniques for machine learning algorithms applied to digital pathology given by Mara Graziani.

Two hands-on sessions showed how to apply interpretability techniques to histopathology data, covering both basic methods and the latest approaches in the research field, such as regression concept vectors and graph-based modeling.

More information about the event HERE

AI4Media’s presence at “European Vision for AI 2021”

European scientists, leaders, high-level experts, and the general public discussed the present and future of artificial intelligence in Europe.

On Thursday 22 April 2021, an important discussion on the present and future development of Artificial Intelligence (AI) in Europe, which involved key European scientists, leaders, high-level experts, and the general public, took place during the “European Vision for AI 2021” event, just a day after the European Commission published its European Approach to Artificial Intelligence. The event, organised by the VISION project consortium partners in cooperation with four networks of centres of excellence on AI (AI4Media, ELISE, TAILOR, HumanE-AI-Net), was motivated by the need to discuss with the general public how the European scientific community is currently planning to move European AI forward, to future success in a competitive environment increasingly dominated by the US and China.

The general public was able to follow the event online and was able to interact via chat and a number of polls. Participants also had the opportunity to choose from three parallel sessions focussing on society, industry and skills & training, depending on their particular interests. A diverse set of panelists, covering a broad range of stakeholders in AI, discussed how the new EU plans affect these areas of high public interest.

At the event, European citizens with an interest in AI had the chance to gain an overview of Europe’s position in the field of AI and to learn about its impact on Europe’s economy and society. Thus, “European Vision for AI 2021” represented the first opportunity for a public discussion on the world’s first legal framework on AI and the new Coordinated Plan with the Member States, announced by the European Commission the day before, following up on its vision for a European eco-system of excellence and trust in AI.

The round-table discussion provided a chance to react to the proposed regulation and action plan with Dita Charanzová, Vice-President of the European Parliament; Kristian Kersting, TU Darmstadt, Germany; Ieva Martinkenaite, VP AI, Telenor, Norway; and Gabriele Mazzini, DG CNECT, European Commission.

According to Ms. Charanzová, the rules “are a good and positive step. We are now all digesting the proposal [..]. I welcome it very much but as I always say ‘the devil is in detail’. So we have to look at every detail of these regulations”. Ms. Charanzová pointed out that there already is legislation in place, so the new regulation is about filling the gaps.

The need to coordinate efforts between research, industry, and society was also stressed by Ieva Martinkenaite: “We need to create best practices, positive examples for industry, involving policymakers, involving academia so that we learn how to apply these practices of trustworthy AI in business. It is not about creating compliance and regulations and directives. It’s about helping us to implement it”.

Kristian Kersting also stressed the importance of maintaining excellence in research through coordinated actions and the creation of large-scale infrastructure, such as a “CERN for AI”.

AI4Media’s video was shown during the event and the project also participated in the parallel session dedicated to “Skills & Training: Education and personal development in AI” where AI4Media’s partner, Prof. Ioannis Pitas from Aristotle University of Thessaloniki, was one of the speakers.

The recorded video of the “Skills & Training” session is available below:

All videos from the event are available at VISION’s website HERE

AI4Media endorsed the Code-against-hate hackathon 2021

AI4Media endorsed and supported the Code-against-hate hackathon that was carried out on March 19-21, 2021. In particular, AI4Media’s partner, the AI Multimedia Lab of University Politehnica of Bucharest, participated with a team in the hackathon, and Prof. Bogdan Ionescu was a mentor at the event.

Scope of the event

Public communicators, influencers, popular brands, journalists, etc. are targeted with online hate speech on a daily basis, particularly on Facebook and Twitter, when creating content. The amount of user-generated comments on these platforms is huge, and identifying hate speech manually has become such a time-consuming task that most public communicators feel discouraged from tackling it.

The event aimed to empower public communicators and social media users in their ongoing fight against hate speech. The real challenge is not only to detect hate speech effectively but to develop a solution that would make moderation of online debate containing hate speech easier.
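As a rough illustration of the kind of tooling the hackathon called for, the sketch below scores comments with a naive keyword baseline and surfaces the most suspicious ones first. The word list, threshold, and function names are hypothetical stand-ins; real hate-speech detection relies on trained classifiers rather than keyword matching.

```python
# Hypothetical placeholder terms; a real system would use a trained model.
FLAG_WORDS = {"hate", "stupid", "idiot"}

def toxicity_score(comment: str) -> float:
    """Fraction of tokens that match the flag-word list."""
    tokens = comment.lower().split()
    if not tokens:
        return 0.0
    return sum(t.strip(".,!?") in FLAG_WORDS for t in tokens) / len(tokens)

def moderation_queue(comments, threshold=0.1):
    """Return comments at or above the threshold, most suspicious first."""
    scored = [(toxicity_score(c), c) for c in comments]
    return [c for s, c in sorted(scored, reverse=True) if s >= threshold]

queue = moderation_queue([
    "Great article, thanks!",
    "You are a stupid idiot",
    "I disagree with this point",
])
print(queue)  # → ['You are a stupid idiot']
```

Even this toy version reflects the moderation-support goal: rather than a binary verdict per comment, it produces a ranked queue so a human moderator sees the likeliest hate speech first.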

More information about the event HERE

AI4Media on Predicting Media Memorability at MediaEval 2021

On 11, 14 and 15 December 2020, AI4Media supported the organisation of a workshop on “Predicting Media Memorability” at MediaEval 2021.

Media platforms such as social networks, media advertisement, information retrieval and recommendation systems deal with exponentially growing data day after day. Enhancing the relevance of multimedia occurrences in our everyday life requires new ways to organize – in particular, to retrieve – digital content. Like other metrics of video importance, such as aesthetics or interestingness, memorability can be regarded as useful to help make a choice between competing videos. This is even truer when one considers the specific use cases of creating commercials or creating educational content. Because the impact of different multimedia content, images or videos, on human memory is unequal, the capability of predicting the memorability level of a given piece of content is obviously of high importance for professionals in the field of advertising. Beyond advertising, other applications, such as filmmaking, education, content retrieval, etc., may also be impacted by the proposed task.

The task addressed in this workshop required participants to automatically predict memorability scores for videos, i.e., scores that reflect the probability of a video being remembered. Participants received an extensive data set of videos with memorability annotations, related information, and pre-extracted state-of-the-art visual features.
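The task lends itself to a simple supervised setup: fit a regressor from the pre-extracted visual features to the annotated memorability scores. The sketch below is a minimal illustration with synthetic data standing in for the real MediaEval features and annotations; ridge regression is just one possible baseline, not the task’s prescribed method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the pre-extracted visual features
# provided to participants: one feature vector per video.
n_videos, n_features = 200, 16
X = rng.normal(size=(n_videos, n_features))

# Synthetic memorability annotations in [0, 1] (probability that a
# video is remembered), loosely dependent on the features.
true_w = rng.normal(size=n_features)
y = 1.0 / (1.0 + np.exp(-0.3 * (X @ true_w)))

# Ridge regression in closed form: w = (X^T X + lam * I)^-1 X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Predicted memorability scores, clipped to the valid [0, 1] range.
scores = np.clip(X @ w, 0.0, 1.0)
print(scores[:5])
```

In the actual benchmark, submissions are ranked by how well the predicted scores correlate with the human annotations, so any regressor producing a per-video score in this form could serve as a starting point.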

The AI4Media partners involved in this workshop were Mihai Gabriel Constantin and Bogdan Ionescu from the Multimedia Lab, UPB, and Claire-Hélène Demarty from InterDigital.

More information about the event HERE

AI4Media Workshop on GANs for Media Content Generation

On October 1st, 2020, AI4Media organised an online workshop on “GANs for Media Content Generation”, with the objective of looking into the use of Generative Adversarial Networks (GANs) for Media production and related challenges.

The workshop was moderated by AI4Media’s Coordinator Yiannis Kompatsiaris, and the following topics were addressed by speakers from the AI4Media consortium:

  • Learning to Predict Pixels Using AI for Content Enhancement and Delivery
  • Deepfake Detection: The Importance of Training Data Pre-processing and Practical Considerations
  • Image and Video Generation: A deep Learning Approach
  • Adversarial Face De-identification for Privacy Protection
  • Major Challenges in the Detection of Synthetic Media and Deepfakes

Watch the recording of this workshop below:

Access to all the presentations HERE

First meeting of many

AI4Media is composed of 30 organisations from 16 EU countries: 9 universities, 9 research centers, and 12 industrial partners. The consortium was built to bring together leading experts from top-ranked institutes and big industries, as well as high-growth institutes and innovative SMEs from across Europe. This diversity and multidisciplinarity will enable the project to reinforce collaboration and exchanges between academia and industry.

In September 2020, this group of organisations met for the first time under the scope of AI4Media, in the first e-meeting of the project. This meeting was a great opportunity to meet the team that will be collaborating during the next 48 months.

Screenshot of the AI4Media kick-off meeting

Each partner presented the strategies and plans for the different activities to be implemented during the project, and relevant questions and opinions were shared and discussed. After two productive days, all the partners had a better understanding of the work ahead, their respective roles, and their involvement.