2023.05.12

First open call projects come to an end with promising results and contributions to the community

The 10 projects funded under the first AI4Media open call have finalised their activities. The projects – five from the application track and five from the research track – started their activities on 1 March 2022; the application-track projects ended on 31 October 2022 and the research-track projects on 28 February 2023. The projects, which addressed different topics in the AI and media domains, delivered new applications and research focusing on audio and music, recommendation systems, edge computation, misinformation, and other areas.

The main results and achievements of the 10 projects are presented below; each project has also provided a contribution to the AI4Media ecosystem.

VRES (Application project by Varia)

The VRES (Varia Research) project set out to revolutionise journalistic research by providing an integrated SaaS solution that allows media monitoring and research organisation in one place. The machine-learning-powered application Varia Research promises more efficient research and additional automated insights. The project has contributed to the AI4Media ecosystem and to the broader media audience with a freely available online research application that brings AI to the people – to the heavy lifters of the news media industry, the journalists.

AIEDJ (Application project by musicube GmbH)

The AIEDJ (AI Empathic DJ) project has focused on developing neural networks that process audio files and automatically tag them with musical features, sound features and emotions. The project has developed software that retrieves user data from the Spotify API and feeds it into a neural network trained on both music metadata and audio files. The project has contributed software that allows search operations based on the musical information retrieved with the neural nets, shifted by the user’s Spotify listening behaviour (i.e. the user’s perspective on music).
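
As a rough illustration of the kind of taste-shifted search described above, the sketch below re-ranks a toy catalogue of tracks by blending a query feature vector with a user profile derived from listening history. The track names, feature dimensions and blending weight are invented for the example; the actual AIEDJ models and Spotify integration are not reproduced here.

```python
import numpy as np

# Hypothetical catalogue: one feature vector per track, standing in for the
# musical/sound/emotion tags a trained network might produce. Values invented.
catalogue = {
    "track_a": np.array([0.9, 0.1, 0.3]),
    "track_b": np.array([0.2, 0.8, 0.5]),
    "track_c": np.array([0.6, 0.6, 0.1]),
}

# Hypothetical user profile: mean feature vector of recently played tracks,
# standing in for listening behaviour pulled from the Spotify API.
user_profile = np.mean([catalogue["track_a"], catalogue["track_c"]], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query_vec, alpha=0.3):
    """Rank tracks by similarity to the query, shifted towards the user's taste."""
    shifted = (1 - alpha) * query_vec + alpha * user_profile
    return sorted(catalogue, key=lambda t: cosine(catalogue[t], shifted), reverse=True)

print(search(np.array([0.8, 0.2, 0.2])))
```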

InPreVI (Application project by JOT Internet Media)

The InPreVI (Inauthentic web traffic Prediction in Video marketing campaigns for investment optimization) project has set out to develop an innovative AI-based system that can, first, identify the main behavioural patterns of inauthentic users to predict their actions and limit their impact on video marketing campaigns and, second, model the quality score associated with a campaign. InPreVI has contributed a dataset that can be used to train and validate predictive and classification models or to enrich other data; a classification model that illustrates potential uses of the dataset; and a predictive model for conversion difference.

CUHE (Application project by IN2 Digital Innovations GmbH)

The CUHE (An explainable recommender system for holistic exploration and CUration of media HEritage collections) project has looked to develop and demonstrate a web-based application, based on AI recommendations, that allows cultural heritage professionals (e.g. museum curators, archivists) as well as researchers to explore existing media and cultural heritage digital collections in a more holistic way and to curate new galleries or create digital stories and exhibitions that showcase and share the new insights gained. The project has contributed the CUHE recommender system, which will be made available as a service, as well as a related dataset.

CIMA (Application project by AdVerif.ai)

The CIMA (Next-Gen Collaborative Intelligence for Media Authentication) project has focused on creating a next-gen collaborative intelligence platform, powered by the latest AI advancements, to make journalists and fact-checkers more effective in media authentication. The work focused on collaborative investigation and collection of evidence to support cross-EU investigations and knowledge sharing. Moreover, the CIMA project has also looked to provide a novel system for the preservation of evidence on the Internet. The project has contributed algorithms for integration with common open-source intelligence.

RobaCOFI (Research project by the Institut Jozef Stefan)

The RobaCOFI (Robust and adaptable comment filtering) project has looked to develop new methods to address the challenge of moderating comments associated with news articles, a task usually done by human moderators, whose decisions can be subjective and hard to make consistently. The project has developed methods for semi-automatic annotation of data, including new variants of active learning in which the AI tools quickly select the data that most need to be labelled. The work builds on recent progress in topic-dependent comment filtering to create tools that take the context of the associated news article into account, reducing the amount of new data needed. The project has contributed several public resources, including a pre-trained offensive language moderation classifier and software tools for model adaptation and active learning.
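
For readers unfamiliar with active learning, the sketch below shows the core idea in its simplest form: train a classifier on a small labelled seed set and send the pool comments it is least certain about to human annotators first. The toy comments, the TF-IDF/logistic-regression pipeline and the uncertainty measure are illustrative assumptions, not the RobaCOFI implementation.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labelled seed set (1 = should be filtered) and a pool of unlabelled comments.
seed_texts = ["great article, thanks", "you are an idiot",
              "interesting point", "shut up, moron"]
seed_labels = [0, 1, 0, 1]
pool = ["well argued piece", "what a stupid take",
        "I disagree politely", "total garbage opinion"]

vec = TfidfVectorizer()
X_seed = vec.fit_transform(seed_texts)
X_pool = vec.transform(pool)

clf = LogisticRegression().fit(X_seed, seed_labels)

# Uncertainty sampling: ask annotators about the comments the model is least sure of.
probs = clf.predict_proba(X_pool)[:, 1]
uncertainty = 1 - np.abs(probs - 0.5) * 2
for idx in np.argsort(-uncertainty)[:2]:
    print(f"label next: {pool[idx]!r} (p_offensive={probs[idx]:.2f})")
```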

NeurAdapt (Research project by Irida Labs)

The NeurAdapt (Development of a Bio-inspired, resource efficient design approach for designing Deep Learning models) project has set out to explore a new path in the design of deep Convolutional Neural Networks (CNNs), which could enable a new family of more efficient and adaptive models for any application that relies on the predictive capabilities of deep learning. Inspired by recent advances in the study of biological interneurons, which highlight the importance of inhibition and random connectivity to the encoding efficiency of neuronal circuits, the project has investigated mechanisms that could impart similar qualities to artificial CNNs. The NeurAdapt project has contributed an “As A Service” asset that provides access to a dynamic-computation CNN feature extraction network for image classification, and a free-to-use executable that offers hands-on experience with the NeurAdapt technology, using a small and fast feature extraction network trained on the CIFAR-10 dataset.
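
The announcement does not detail NeurAdapt’s exact mechanism, but the general idea of dynamic computation can be illustrated with a simple input-dependent channel gate: a lightweight side branch decides, per input, how strongly each convolution channel contributes, so “easy” inputs effectively use less of the network. The PyTorch block below is a minimal sketch of that generic idea, not the project’s actual design.

```python
import torch
import torch.nn as nn

class GatedConvBlock(nn.Module):
    """Toy conditional-computation block: a lightweight gate decides, per input,
    how strongly each convolution channel contributes. Illustrative only."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # summarise the input per channel
            nn.Flatten(),
            nn.Linear(in_ch, out_ch),
            nn.Sigmoid(),              # soft gate in [0, 1] per output channel
        )

    def forward(self, x):
        g = self.gate(x)                              # (batch, out_ch)
        y = torch.relu(self.conv(x))
        return y * g.unsqueeze(-1).unsqueeze(-1)      # suppress gated-off channels

block = GatedConvBlock(3, 16)
print(block(torch.randn(2, 3, 32, 32)).shape)         # torch.Size([2, 16, 32, 32])
```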

SMAITE (Research project by the University of Manchester)

The SMAITE (Preventing the Spread of Misinformation with AI-generated Text Explanations) project has focused on developing a novel tool for automated fact-checking of online textual content that contextualises and justifies its decisions by generating human-accessible explanations. The project’s vision has been to equip citizens with a digital literacy tool that not only judges the veracity of any given claim but, more importantly, also presents explanations that contextualise and describe the reasoning behind the judgement.

TRACES (Research project by the Sofia University “St. Kliment Ohridski”, GATE Institute)

The TRACES (AuTomatic Recognition of humAn-written and deepfake-generated text disinformation in soCial mEdia for a low-reSourced language) project has set out to find solutions and develop new methods for disinformation detection in low-resourced languages. The innovativeness of TRACES lies in detecting both human-written and deepfake-generated disinformation, recognising disinformation by its intent, the interdisciplinary mix of solutions, and the creation of a package of methods, datasets, and guidelines for building such methods and resources for other low-resourced languages. The project has contributed machine learning models for detecting untrue information and texts automatically generated in Bulgarian with GPT-2 and ChatGPT; social media datasets automatically annotated with markers of lies; and other resources.

edgeAI4UAV (Research project by the International Hellenic University)

The edgeAI4UAV (Computer Vision and AI Algorithms Edge Computation on UAVs) project has focused on developing a complete framework for detecting and tracking moving people and objects in order to extract evidence data (e.g. photos and videos of specific events) in real time (as the event occurs), for example in cinematography tasks, through a reactive Unmanned Aerial Vehicle (UAV). To this end, the edgeAI4UAV project implemented an edge computation node for UAVs, equipped with a stereoscopic camera, which provides lightweight stereoscopic depth information to be utilised for evidence detection and UAV locomotion.
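
To give a sense of what lightweight stereoscopic depth estimation on an edge node can look like, the sketch below computes a dense disparity map with OpenCV’s semi-global block matcher and converts it to metric depth. The camera parameters and the random stand-in frames are assumptions for illustration; the project’s actual onboard pipeline is not described in the announcement.

```python
import cv2
import numpy as np

# Synthetic stand-ins for the left/right frames of the UAV's stereoscopic camera;
# in practice these would come from the onboard edge node.
left = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
right = np.random.randint(0, 255, (480, 640), dtype=np.uint8)

# Semi-global block matching yields a dense disparity map; depth is inversely
# proportional to disparity (depth = focal_length * baseline / disparity).
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

focal_length_px, baseline_m = 700.0, 0.12   # hypothetical camera parameters
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_length_px * baseline_m / disparity[valid]

if valid.any():
    print("median depth of valid pixels (m):", float(np.median(depth_m[valid])))
else:
    print("no valid disparities found for these frames")
```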

Authors: Samuel Almeida & Catarina Reis (F6S)