Artificial Intelligence-generated content (generative AI, for example ChatGPT, DALL-E, or Midjourney) has garnered significant attention recently due to its ability to create credible-looking artificial content cheaply and at scale, including text, deepfake images and videos, art, and music. Because generative AI systems can produce highly convincing content at low cost, and because such content is difficult to detect, it could be harnessed by bad actors for online disinformation and abuse.
Another drawback of generative AI is that the models are trained on datasets that have been found to reflect implicit societal biases on issues such as gender and race. On the positive side, generative AI models also offer opportunities for countering online abuse and disinformation by serving as useful companions to media professionals, engaged citizens, and debunkers.
This event focuses on the key issues and challenges around generative AI and tackling disinformation. We cordially invite you to join this policy and innovation conference, co-organised by the new Horizon Europe projects vera.ai, AI4TRUST, and TITAN and the H2020 project AI4Media, which develop novel AI techniques to counter online disinformation, particularly in light of recent advances in generative AI.