2024.03.11

Recommenders: Amplifiers or Mitigators of Human Biases?

Recommender systems are often criticized for potentially contributing to filter bubbles. This phenomenon is sometimes attributed to algorithmic bias, suggesting that systems operate contrary to user interests. However, this perspective may be overly simplistic: recommender systems are typically optimized for "utility" as measured by immediate user engagement such as clicks and likes. In doing so, they inherently reinforce human biases, particularly two. The first is (i) confirmation bias, the tendency to search for, interpret, favor, and recall information in a way that confirms one's preexisting beliefs. This bias once supported rapid decision-making, which was crucial for survival: in the face of imminent threats, it simplifies complex information processing, enabling quicker responses by focusing on data that supports known strategies or dangers. The second is (ii) in-group bias, the predisposition to engage with content or groups that share similar attributes or opinions. It enhanced social cohesion and cooperation within tribes, fostering trust and mutual support, all crucial for survival in environments where human groups competed for resources. These biases, while advantageous throughout most of human evolution, pose serious challenges in today's digital environment, which offers unprecedented freedom to filter information and engage only with agreeable content and like-minded individuals. As a result, recommenders can reinforce such human biases and reaffirm users' beliefs by "filtering us" into information bubbles. The same technology, however, can also be used to reduce them: recommender systems can be designed to provide us with a broader range of viewpoints and content, pushing us to also consider opinions and information outside of our bubbles, thereby promoting the diverse public discourse that is essential for democratic engagement.

To tackle this issue, we can incorporate metrics such as novelty, diversity, unexpectedness, and serendipity into recommendation algorithms, with the aim of broadening users' informational horizons. Moreover, this approach can be supported by technologies that automatically analyze and annotate content, providing the data needed to drive recommendations that are both subtle and transparent. The goal is to encourage user engagement with a variety of topics and viewpoints without overwhelming them.
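One common way to operationalize such a relevance/diversity trade-off is re-ranking with Maximal Marginal Relevance (MMR). The sketch below is only an illustration, not a method proposed in this article: the item names, scores, and similarity values are hypothetical, and `scores` stands in for whatever engagement prediction a real system would produce.

```python
def mmr_rerank(candidates, scores, similarity, k, lam=0.7):
    """Pick k items, trading off predicted relevance against redundancy
    with items already selected. lam=1.0 reproduces pure engagement
    ranking; lower values of lam favor more diverse result lists."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def mmr_score(item):
            # Redundancy = similarity to the most similar item already chosen.
            redundancy = max((similarity(item, s) for s in selected), default=0.0)
            return lam * scores[item] - (1 - lam) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected


# Hypothetical example: "a" and "b" cover the same topic, "c" and "d" differ.
items = ["a", "b", "c", "d"]
scores = {"a": 0.9, "b": 0.85, "c": 0.5, "d": 0.4}
sim = {("a", "b"): 0.95, ("b", "a"): 0.95}
similarity = lambda x, y: sim.get((x, y), 0.1)

print(mmr_rerank(items, scores, similarity, k=3, lam=0.5))
# → ['a', 'c', 'd']: the near-duplicate "b" is deferred despite its high score.
```

With a high lam the list is dominated by the most clickable items; lowering it lets lower-scored but topically different items surface, which is exactly the lever a bubble-mitigating recommender would tune.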

Can services and business models that prioritize long-term user satisfaction over short-term metrics like clicks be successful? Similar shifts have succeeded in other sectors: for example, despite our preference for sugary foods, the market for healthier options has flourished. Moreover, as our understanding of the underlying problems deepens, regulatory measures become more likely. A precondition to this, however, is that we start recognizing our personal biases and limitations and how they contribute to the creation of filter bubbles and all related problems, creating the willingness to tackle them. This includes the exploration of new business models for a healthier information diet, because the current models do not yet address this. They have, for better or worse, catered too much to our immediate urges, at the expense of long-term well-being and societal discourse.

Author: Patrick Aichroth (Fraunhofer IDMT)