How can we integrate AI successfully in news production? VRT shares their experiences and good practices

As part of the AI4Media project, VRT, the public-service broadcaster for the Flemish Community of Belgium, has been involved in integrating new AI applications into its workflows. This is a highly complex process in which there is much to learn from good practices – here VRT shares a few of its insights.

To facilitate better integration processes, VRT has developed a stand-alone tool in which the possibilities and functionalities of a new AI application are made visible and tangible to the team involved. This enables key stakeholders, such as editors-in-chief, to better assess the added value and make an informed decision on whether to go ahead with the integration.

The tool is called the Smart News Assistant, a name chosen to express its role in ‘assisting’ or ‘co-creating’. This is emphasised because it is important that news professionals remain in control of the production process.

Starting from the content source

When developing this tool, VRT’s news department stressed that it is highly important to always start from existing content when assessing the capabilities of a potential AI solution, as this ensures a sense of reliability and control.

Therefore, the tool allows the user to start from an existing piece of content – video, audio or text – and see how AI could generate new content formats from it, such as a short video or an Instagram post.

After the possibilities of AI have been made tangible in the Smart News Assistant, the editors-in-chief can assess the added value. If the editors see potential in the solution through this initial test, VRT’s innovation team proceeds to explore integration possibilities. This involves collaboration with the technology teams responsible for the surrounding systems with which the AI application is to be integrated.

Towards integrating automatic summarisation

One of the potential applications made tangible with the Smart News Assistant was automated summarisation, where a news article is automatically turned into bullet points. This was seen as highly valuable by the editors, and it is something that VRT is now working towards integrating. However, there are many challenges in this process, such as the integration into the existing CMS.
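
To make this summarisation step concrete, here is a minimal sketch of how an article could be turned into bullet points via a generic LLM chat API. This is purely illustrative: the article does not describe VRT’s actual implementation, and the client library, model name and prompt below are assumptions.

# Illustrative sketch only: VRT's actual summarisation pipeline is not
# described in this article. Client library, model name and prompt are
# assumptions, not a description of VRT's system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_to_bullets(article_text: str, max_bullets: int = 5) -> str:
    """Turn a news article into a short bullet-point summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any chat-capable model would do
        messages=[
            {"role": "system",
             "content": (f"Summarise the article into at most {max_bullets} "
                         "concise, factual bullet points. Do not add "
                         "information that is not in the article.")},
            {"role": "user", "content": article_text},
        ],
        temperature=0.2,  # keep the summary close to the source text
    )
    return response.choices[0].message.content

# Example usage:
# print(summarise_to_bullets(open("article.txt").read()))

In practice, such a function would sit behind an interface like the Smart News Assistant, with the editor reviewing and correcting the output before it is used anywhere.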

In this work, the team involved focuses not only on integrating the AI functionality into the familiar news production flow, but also on emerging formats. VRT, for example, recently introduced WhatsApp updates, where VRT NWS has its own channel – a format where the summarisation tool might also prove useful. By thinking beyond the existing news flow, the team can work more efficiently on integration and anticipate other potential use cases.

This is also where the Smart News Assistant provides additional value: while the integration process is ongoing, editors can already use the AI tool as it is presented in the Smart News Assistant interface. Although the tool is not yet directly integrated into their workflows, editors can manually copy-paste its textual suggestions from the interface into the CMS. This places less stress on the integration team and delivers immediate value to the editors – even if it requires a few extra clicks.


Screenshot: Smart News Assistant (summary) – by Chaja Libot (VRT)


Screenshot: Smart News Assistant (WhatsApp update + fine-tuning result) – by Chaja Libot (VRT)

Author: Chaja Libot (Design Researcher, VRT) 

Recommenders: Amplifiers or Mitigators of Human Biases?

Recommender systems are often criticized for potentially contributing to filter bubbles. This phenomenon is sometimes attributed to algorithmic bias, suggesting that the systems operate contrary to user interests. However, this perspective may be overly simplistic: recommender systems are typically optimized for “utility” as a metric, driven by immediate user engagement signals such as clicks and likes. In doing so, they inherently reinforce two human biases in particular.

The first is confirmation bias, the tendency to search for, interpret, favor, and recall information in a way that confirms one’s preexisting beliefs. Throughout human evolution, this bias supported rapid decision-making, which was crucial for survival: in the face of imminent threats, it simplified complex information processing and enabled quicker responses by focusing on data that supported known strategies or dangers. The second is in-group bias, the predisposition to engage with content or groups that share similar attributes or opinions. This bias enhanced social cohesion and cooperation within tribes, fostering trust and mutual support, all crucial for survival in environments where human groups competed for resources.

These biases, while advantageous throughout most of human evolution, pose serious challenges in today’s digital environment, which offers unprecedented freedom to filter information and engage only with agreeable content and like-minded individuals. As a result, recommenders can reinforce such human biases and reaffirm users’ beliefs by “filtering us” into information bubbles. The same technology, however, can also be used to reduce them: recommender systems can be designed to expose us to a broader range of viewpoints and content, pushing us to also consider opinions and information outside of our bubbles, thereby promoting the diverse public discourse that is essential for democratic engagement.
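
This feedback loop can be made tangible with a toy simulation. The sketch below is entirely hypothetical – the user model, click probabilities and all parameters are invented for illustration: a recommender that ranks purely by accumulated clicks, serving a user with confirmation bias, ends up recommending an ever narrower slice of content.

# Toy simulation of an engagement-driven feedback loop (illustrative only).
# A user prefers items near their own "stance"; a recommender that ranks
# purely by past clicks gradually narrows what the user gets to see.
import random

random.seed(0)
user_stance = 0.8                       # user's position on a 0..1 opinion axis
items = [i / 99 for i in range(100)]    # item stances spread evenly over 0..1
clicks = [1] * 100                      # optimistic prior: one click each

for step in range(2000):
    # Recommend proportionally to observed engagement (click counts)
    item = random.choices(range(100), weights=clicks)[0]
    # Confirmation bias: click probability falls with distance from the
    # user's own stance and is zero beyond a distance of 0.25
    if random.random() < max(0.0, 1 - 4 * abs(items[item] - user_stance)):
        clicks[item] += 1

top = sorted(range(100), key=lambda i: clicks[i], reverse=True)[:10]
print("Most-recommended stances:", sorted(round(items[i], 2) for i in top))
# The most-recommended items cluster tightly around the user's own stance
# (~0.8): the loop has "filtered" the user into a narrow slice of content.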

To tackle this issue, we can incorporate metrics such as novelty, diversity, unexpectedness, and serendipity into recommendation algorithms, with the aim of broadening users’ informational horizons. Moreover, this approach can be supported by technologies that automatically analyze and annotate content, providing the data needed to drive recommendations that are both subtle and transparent. The goal is to encourage user engagement with a variety of topics and viewpoints without overwhelming them.
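
As a minimal sketch of how such metrics can be operationalized, the following re-ranking step uses Maximal Marginal Relevance (MMR), a classic technique that trades predicted relevance off against similarity to items already selected. The inputs (relevance scores, content embeddings) and the trade-off parameter lam are assumptions for illustration, not a description of any specific production system.

# Minimal sketch of diversity-aware re-ranking via Maximal Marginal
# Relevance (MMR). Inputs and the lambda trade-off are illustrative.
import numpy as np

def mmr_rerank(relevance: np.ndarray,
               embeddings: np.ndarray,
               k: int,
               lam: float = 0.7) -> list[int]:
    """Select k items balancing relevance (lam) against redundancy (1 - lam).

    relevance:  shape (n,), predicted engagement/utility per candidate
    embeddings: shape (n, d), L2-normalised content embeddings
    """
    selected: list[int] = []
    candidates = list(range(len(relevance)))
    while candidates and len(selected) < k:
        best, best_score = None, -np.inf
        for i in candidates:
            # Highest cosine similarity to anything already selected
            redundancy = max(
                (float(embeddings[i] @ embeddings[j]) for j in selected),
                default=0.0,
            )
            score = lam * relevance[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected

Setting lam = 1 recovers pure relevance ranking, while lower values push the list toward diversity; novelty, unexpectedness or serendipity scores could be substituted for the redundancy term in the same scheme.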

Can services and business models that prioritize long-term user satisfaction over short-term metrics like clicks be successful? Similar shifts have succeeded in other sectors: despite our preference for sugary foods, for example, the market for healthier options has flourished. Moreover, as our understanding of the underlying problems deepens, regulatory measures become more likely. A precondition for all of this, however, is that we start recognizing our personal biases and limitations and how they contribute to the creation of filter bubbles and the problems that follow from them, creating the willingness to tackle them. This includes exploring new business models for a healthier information diet, because the current models do not yet address this: they have, for better or worse reasons, catered too much to our immediate urges, at the expense of long-term well-being and societal discourse.

Author: Patrick Aichroth (Fraunhofer IDMT)