Recommender Systems: Techniques, Effects, and Measures Toward Pluralism and Fairness

  • Open Access
  • First Online: 21 December 2023


Peter Knees, Julia Neidhardt & Irina Nalis


Recommender systems are widely used in applications such as online shopping, social media, and news personalization. They help platforms by delivering only the most relevant and promising information to their users and help people by mitigating information overload. At the same time, algorithmic recommender systems are a new form of gatekeeper that preselects and controls the information being presented and actively shapes users’ choices and behavior. This becomes a crucial aspect because, if unaddressed and not safeguarded, these systems are liable to perpetuate and even amplify existing biases, including unwanted societal biases, leading to unfair and discriminatory outcomes. In this chapter, we briefly introduce recommender systems, their basic mechanisms, and their importance in various applications. We show how their outcomes and performance are assessed and discuss approaches to addressing pluralism and fairness in recommender systems. Finally, we highlight recently emerging directions within recommender systems research, pointing out opportunities for digital humanism to contribute interdisciplinary expertise.



1 Introduction

Recommender systems (RSs) are software tools and techniques that use data from users to suggest items they will probably like. These suggestions cover a wide range of items, including books, news articles, and music, as well as more complex products such as tourist offers, job vacancies, or financial loans. Today, recommender systems are widely adopted. Personalization techniques are used by all major online platforms, such as Google, Amazon, Facebook, YouTube, Netflix, Spotify, Booking.com, LinkedIn, and many others, to tailor their services to the specific preferences and needs of individual users. It can be argued that recommender systems provide a backbone to the modern, industrialized Web, as they facilitate, as well as steer, access to the world’s digital content. As such, they have become a new, albeit more subtle, form of gatekeeper for information, culture, and other resources, built on technological opportunities and the interests of those operating them.

The implications of incorporating automatic recommender systems to structure access to information and digital goods in virtually all large-scale Web services and platforms are wide-reaching and have led to increased interest from the general public. While these systems were initially welcomed as another convenience of the digital world that effortlessly identifies matching content, limitations and frustrations soon led to a more critical view of their effects. These effects become of particular interest when they have the potential to affect society and democratic processes, as in the case of “filter bubbles” (Pariser, 2011).

From the perspective of digital humanism, technology that threatens democracy and leads to the isolation of individuals must be redesigned and shaped in accordance with human values (Werthner et al., 2019). More specifically, artificial intelligence (AI) and automated decision-making systems such as recommender systems often result in black-box models whose decision processes remain opaque, whose biases are unknown, and whose results are unfair (Werthner et al., 2023). As such, we need to understand the principles of recommender systems and analyze strategies to overcome situations where the mechanisms of the applied method and/or the characteristics of the underlying data dictate undesired and unfair outcomes.

In this contribution, we focus on two desiderata for recommender systems with possible broader societal implications: diversity, as a proxy for pluralism, and fairness. Specifically, we outline how recommender systems can be turned from a threat to pluralism and fairness into an instrument for promoting them. As always, the situation is complex, and no single (technical) solution can fully remedy the undesired effects. After describing the main concepts, methods, and practices in recommender systems research, we discuss the concepts of diversity and fairness in the context of filter bubbles. This is followed by a discussion of optimization goals beyond accuracy, such as diversity and fairness, in more detail. We then investigate methods and research directions for promoting both concepts. Finally, we touch on the emerging field of moral and human-centered recommender systems. We include examples to illustrate the concepts and methods discussed before summing up the discussion with the main take-away messages. This discussion is continued and deepened in the following contributions, where topics of bias and the balancing of various and diverse interests (see the chapter by Baeza-Yates and Murgai) and automatic content moderation (see the chapter by Prem and Krenn) are addressed.

In the following, we investigate recommender systems and optimization goals beyond accuracy, such as diversity and fairness, in more detail and highlight current research directions.

2 Recommender Systems: Concepts and Practices

RSs are software applications that use a variety of data sources, including users’ past behavior and preferences, to suggest items (e.g., goods, articles, content) to a user that they are likely to find interesting (Ricci et al., 2022). The overall schema of a RS is illustrated in Fig. 1. To provide personalized suggestions, the RS needs knowledge about the items as well as knowledge about the users. With respect to the items, this knowledge can include textual descriptions, keywords, genres, product categories, release date, or price. With respect to the users, RSs commonly use demographic data such as age and gender; data about a user’s past behavior such as previous purchases, clicks, or ratings; or a user’s online social network. Importantly, the RS has to establish a relationship between the two sides, items and users, so that it knows which items a specific user is likely to enjoy. This relationship is typically established using previous purchases, clicks, ratings, or other behavioral data. Several fundamental recommendation approaches are traditionally distinguished, each exploiting different aspects to determine the items to be suggested to a user (Burke, 2007; Ricci et al., 2022): In the content-based approach, items are recommended that have attributes similar to those the user previously liked (e.g., the same category). With collaborative filtering, items liked by users with similar preferences are considered important (e.g., “users who bought this also bought that”). Demographic systems recommend items based on user demographics (e.g., items that are popular in a specific age group). Knowledge-based approaches make recommendations based on domain knowledge about how specific item properties match user preferences (e.g., a travel recommender system that leverages domain knowledge about various travel destinations and their properties).
Community-based approaches recommend items liked by the user’s friends, often within an online social network. Hybrid recommender systems combine different recommendation techniques to make more accurate and personalized recommendations. Since recommender systems aim to offer personalized suggestions, all the techniques mentioned rely on knowledge about the user. Therefore, every RS needs to include a user model or user profile where this knowledge is accumulated and stored (Jannach et al., 2010 ). However, this dependency on user data gives rise to significant concerns regarding privacy and misuse.
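To make the collaborative filtering idea above concrete, the following toy sketch (invented data, not any production system) compares users with cosine similarity and scores a user’s unseen items by the weighted ratings of similar users:

```python
# Toy user-based collaborative filtering: users are compared via
# cosine similarity of their rating vectors; unseen items are scored
# by summing similar users' ratings weighted by that similarity.
from math import sqrt

ratings = {  # user -> {item: rating}; illustrative data only
    "u1": {"A": 5, "B": 4},
    "u2": {"C": 5},
    "u3": {"A": 4, "B": 5, "D": 4},
}

def cosine(r1, r2):
    common = set(r1) & set(r2)
    if not common:
        return 0.0
    dot = sum(r1[i] * r2[i] for i in common)
    n1 = sqrt(sum(v * v for v in r1.values()))
    n2 = sqrt(sum(v * v for v in r2.values()))
    return dot / (n1 * n2)

def recommend(user, k=1):
    seen = set(ratings[user])
    scores = {}
    for other, r in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], r)
        for item, val in r.items():
            if item not in seen:
                scores[item] = scores.get(item, 0.0) + sim * val
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("u1"))  # u3 is most similar to u1, so item D is suggested
```

A content-based variant would replace the rating-vector similarity with a similarity over item attributes (e.g., shared genres), leaving the overall scoring loop unchanged.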

Fig. 1 Overall structure of a RS: item features, user-item preference data, and user features feed into the recommender system, which selects from a set of items (A, B, and C) and suggests item B to the user.

RSs face a challenge when they encounter new users, since there may not be enough data to build a user model. This situation is called the “cold-start problem,” and the best way to address it depends on the specific use case. The same is true for items that are new to the system. In Fig. 2, the two most common RS approaches, i.e., collaborative filtering and the content-based approach, are conceptually shown. Collaborative filtering encounters the cold-start problem when there is a lack of user-item interaction data, making it challenging to identify similar users. For new users, the recommender system may initially suggest popular items. Similarly, for new items, the system faces difficulties in making recommendations until some users interact with them. In contrast, content-based approaches overcome the cold-start problem for new items by relying on an item’s inherent characteristics or features. The system can recommend the item to users who have shown interest in similar items, even if the item has not been interacted with before. Additionally, for new users, the system can provide recommendations based, for example, on user-provided preferences during the onboarding process.
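The popularity fallback for new users mentioned above can be sketched as follows; the interaction log and the popularity-based default are illustrative assumptions, not a prescribed design:

```python
# Cold-start fallback sketch: with no interaction history, user
# similarity cannot be computed, so the system falls back to globally
# popular items (a common, if crude, default for new users).
from collections import Counter

interactions = [  # (user, item) click log; illustrative data only
    ("u1", "A"), ("u1", "B"), ("u2", "A"), ("u3", "A"), ("u3", "C"),
]

def recommend(user, k=2):
    history = {item for u, item in interactions if u == user}
    popularity = Counter(item for _, item in interactions)
    if not history:  # cold start: no data about this user yet
        return [item for item, _ in popularity.most_common(k)]
    # trivially "personalized" branch: popular items the user has not seen
    return [item for item, _ in popularity.most_common() if item not in history][:k]

print(recommend("new_user"))  # falls back to the most popular items
print(recommend("u1"))        # filters out items u1 already interacted with
```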

Fig. 2 Collaborative filtering vs. content-based approach: in collaborative filtering, user 1 (items A and B) and user 3 (items A, B, and D) are identified as similar users through their overlapping interactions; in the content-based approach, user 1 is mapped to item A, and items A and C are identified as similar items.

In the last few years, deep learning architectures have been increasingly used in recommender systems, particularly for capturing various patterns and dealing with high complexity (Zhang et al., 2019 ). Large language models have also emerged as powerful tools within recommender systems very recently (Liu et al., 2023 ).

Traditionally, recommendation approaches have focused on predicting how a given user would rate certain items. These approaches are typically tested through so-called offline evaluation, where actual ratings are withheld and used to assess the forecasts. The more accurately a method can predict the withheld ratings, the more successful it is. This evaluation approach has significantly advanced the field of recommender systems. However, it has limitations, such as the absence of real-time feedback, the limited availability of contextual information, and the inability to directly measure user satisfaction (Jannach et al., 2016). To address these limitations, offline evaluation is often supplemented with online evaluation, user studies, and A/B testing, which provide a more realistic and dynamic assessment of recommender systems in real-life settings.
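A minimal sketch of this offline protocol, assuming a toy train/test split and a naive per-item-mean predictor (both invented for illustration): withheld ratings are predicted and the forecasts scored with RMSE, a common error measure for rating prediction:

```python
# Offline evaluation sketch: hold out known ratings, predict them
# with a model fit on the training split (here a per-item mean),
# and score the forecasts with root-mean-square error (RMSE).
from math import sqrt

train = [("u1", "A", 5), ("u2", "A", 3), ("u1", "B", 4), ("u3", "B", 2)]
test = [("u3", "A", 4), ("u2", "B", 5)]  # withheld (user, item, rating)

def item_means(data):
    sums, counts = {}, {}
    for _, item, r in data:
        sums[item] = sums.get(item, 0) + r
        counts[item] = counts.get(item, 0) + 1
    return {i: sums[i] / counts[i] for i in sums}

def rmse(model, heldout):
    errs = [(model[item] - r) ** 2 for _, item, r in heldout]
    return sqrt(sum(errs) / len(errs))

model = item_means(train)
print(rmse(model, test))  # lower RMSE = more accurate re-prediction
```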

Relying solely on accuracy measurements, together with a lack of diversity in tailored content consumption, can introduce bias and lead to filter bubbles, echo chambers, and related phenomena (Stray et al., 2022). Specifically, users may become trapped in echo chambers and filter bubbles when a system considers only their preexisting likes and interests in order to produce the most accurate recommendations, which may lead to a lack of “media pluralism,” i.e., the exposure to and consumption of a variety of information, ideas, and viewpoints (Vermeulen, 2022). Correspondingly, there is growing recognition that the quality of a recommender system extends beyond accuracy measurements alone.

3 Recommender Systems as a Threat to Pluralism and Fairness?

An often-discussed effect of automatic information filters such as recommender systems is a loss of diversity in the presented options, due to the emphasis on similarity to previous choices. This has been branded and popularized by Pariser (2011) as “filter bubbles.” For instance, when a user consuming social media shows interest in posts and articles dealing with, say, migration, recommender systems can pick up this signal and increasingly suggest content dealing with migration. This might lead to an overrepresentation of the topic in the shown posts and oust other topics potentially of interest. As such, the topic of migration, despite originally being of only temporary interest, eventually occupies a disproportionate amount of space in the recommended content and continues to draw the user’s attention. Moreover, the recommender system might increasingly present posts from the authors of the consumed posts, i.e., the providers, ousting other authors and leading to a loss of diversity in sources.

Pariser ( 2011 ) argues that algorithmic information filters and personalized services are directly connected to individualization and intellectual isolation, ultimately leading to polarization and social fragmentation, posing a threat to democratic societies. Several works have subsequently investigated this connection (e.g., Nguyen et al., 2014 ; Aridor et al., 2020 ) and found inconclusive results regarding users’ behavior upon usage of recommender systems and its impact on the diversity of items consumed over time. As a consequence, Dahlgren ( 2021 ) suggests a further differentiation between technological filter bubbles and their consequences that manifest in societal filter bubbles . To investigate the former, Michiels et al. ( 2022 ) provide an operational definition of technological filter bubbles as “a decrease in the diversity of a user’s recommendations over time, in any dimension of diversity, resulting from the choices made by different recommendation stakeholders.” Correspondingly, the personalization-polarization hypothesis assumes that these filter bubbles influence the division of large crowds (and thus also of society) into individual groups due to their strongly divergent opinions (Keijzer & Mäs, 2022 ). The importance of the concept of diversity on the technical side is linked to the societal relevance, as, e.g., stated by Helberger et al. ( 2018 ): “As one of the central communication policy goals, diversity refers to the idea that in a democratic society informed citizens collect information about the world from a diverse mix of sources with different viewpoints so that they can make balanced and well-considered decisions.”

The definition by Michiels et al. (2022) further highlights the aspect of different recommendation stakeholders (Abdollahpouri & Burke, 2022). While diversity is an important mechanism to avoid one-sidedness with regard to topics and/or sources, within a recommender system different interests compete and ultimately might also conflict with the goal of diversity. The typical stakeholders to consider in a recommender system are the consumers (who are typically referred to as the “users”), the providers of items, and the system (or service provider) itself. They all want to ensure that they are not treated unfairly and to optimize their gain or utility. For instance, users might be treated unfairly if the quality of service they receive depends on individual traits, particularly if these relate to sensitive attributes such as race or gender. Item providers might be treated unfairly if they are deprived of exposure to users, for instance, by not being recommended. The system has the task of maintaining fairness toward all the different stakeholders (or at least plausibly arguing that it does) while maximizing utility, e.g., by recommending items that are most profitable or otherwise beneficial for the system. A typical example to showcase fairness in recommendation toward multiple stakeholders is job recommendation, as performed on business-oriented social media platforms such as LinkedIn or Xing (Ekstrand et al., 2022). In addition to country-specific regulations that might also play a role, matching candidates with job offers is an inherently multi-sided fairness problem. In this scenario, job seekers and employers are consumers and providers alike, always with the goal of obtaining recommendations with the highest utility, raising questions such as: Are recommendations of job opportunities distributed fairly across users? Are job candidate fit scores fair, or do they under- or over-score certain candidates? Do users have a fair opportunity to appear in result lists when recruiters are looking for candidates for a job opening? Are employers in protected groups (e.g., minority-owned businesses) having their jobs fairly promoted to qualified candidates?

Besides the fair distribution of opportunities, questions of bias with regard to certain candidates or employers, especially in protected groups, arise. Beyond this simplified view of competing interests within a recommender system, there are potentially many more stakeholders to consider. For instance, in music streaming, there are multiple types of providers, e.g., the composers, the record labels, or the rights owners; food order platforms add delivery drivers as stakeholders; etc.

4 Beyond Accuracy: Diversity, Novelty, and Serendipity

The “beyond-accuracy” paradigm in recommender system research was sparked by users scrolling endlessly through items they are already familiar with or that are too similar to their current preferences. This field of study investigates different evaluation measures to improve the value and quality of recommendations (Smets et al., 2022). Other aspects, such as diversity, serendipity, novelty, and coverage, are increasingly being considered in evaluation. These concepts are briefly characterized in the following (Kaminskas & Bridge, 2016; Castells et al., 2022).

Diversity in recommender systems means including a range of different items in the recommendations for users. The goal is to provide a broad selection of items that cover various categories or genres. When the recommender system offers a diverse list of recommendations, users get to see a wide range of options. This allows them to explore and discover new items, which helps them expand their horizons and ideally improves their overall experience. Content-based approaches (see Sect. 2) often lack diversity because they focus on recommending items that are similar in, e.g., genre. Collaborative filtering can have higher diversity than content-based approaches because it considers the preferences of other users, which can vary widely and thus expose the user to a wider variety of items. An area that attracts a growing number of studies is the domain of news recommendation. Users may not be exposed to opposing viewpoints if tailored news recommendations lack diversity (Stray et al., 2022). A news recommender must find a balance between remaining relevant to users’ interests and delivering enough diversity, such as exposing users to new topics and categories, to maintain their interest. The deep neural network presented by Raza and Ding (2020) satisfies the user’s requirement for information on subjects in which they have previously expressed interest while going beyond accuracy metrics. With an emphasis on the effectiveness of news diversity and confidence in news recommenders, Lee and Lee (2022) investigated the role of perceived personalization and diversity in such services. They examined the effects of perceived personalization and news diversity on users’ inclination to keep using a service and found that diversity had a positive effect on user satisfaction and continuance intention. From the perspective of the interplay of news diversity and democracy, Helberger et al. (2019) highlight the importance of perspective diversity for well-informed citizens of a democratic society. Furthermore, they underline that the interests of the users (autonomy, privacy, and accuracy) need to be considered and balanced against the power and opportunities that data and algorithms have to offer; herein lies a great challenge for the design of recommender systems. Another challenge for RSs lies in finding a balance between the most accurate and simultaneously diversified recommendations for the user (Möller et al., 2018; Ribeiro et al., 2015).

Occasionally used in connection with measuring diversity is coverage . Coverage refers to the proportion of items within the system that the recommender system can recommend to users. A high coverage indicates that the recommender system can suggest items from different genres, topics, or domains, accommodating the varied tastes of its user base.
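The notions of diversity and coverage can be made concrete with simple metrics; the genre data and the Jaccard-based item distance below are illustrative choices, not the only possible definitions:

```python
# Two beyond-accuracy metrics: intra-list diversity (average pairwise
# distance between recommended items, here 1 - Jaccard similarity of
# genre sets) and catalog coverage (share of all items ever recommended).
from itertools import combinations

genres = {"A": {"rock"}, "B": {"rock", "pop"}, "C": {"jazz"}, "D": {"pop"}}
catalog = set(genres)

def jaccard_distance(i, j):
    inter = genres[i] & genres[j]
    union = genres[i] | genres[j]
    return 1 - len(inter) / len(union)

def intra_list_diversity(items):
    pairs = list(combinations(items, 2))
    return sum(jaccard_distance(i, j) for i, j in pairs) / len(pairs)

def coverage(all_rec_lists):
    recommended = set().union(*all_rec_lists)
    return len(recommended) / len(catalog)

print(intra_list_diversity(["A", "B", "C"]))  # mixed-genre list scores high
print(coverage([["A", "B"], ["A", "C"]]))     # 3 of 4 catalog items shown
```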

The concept of novelty refers to the degree of uniqueness or freshness of the recommended items. The goal is to suggest items that the user is not familiar with or has not seen before. Novel recommendations aim to introduce users to new and unexpected items, encouraging them to explore and avoid repeating previously consumed items.
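One common way to quantify novelty (an assumption for illustration, not the only definition in the literature) is an item’s self-information, -log2(p), where p is the fraction of users who have interacted with it; rarely seen items score high:

```python
# Novelty as mean self-information of the recommended items:
# popular items contribute little novelty, rare items a lot.
from math import log2

interactions = {"A": 90, "B": 50, "C": 5}  # item -> number of users (toy data)
n_users = 100

def novelty(rec_list):
    return sum(-log2(interactions[i] / n_users) for i in rec_list) / len(rec_list)

print(novelty(["A"]))  # widely consumed item: low novelty
print(novelty(["C"]))  # rarely consumed item: high novelty
```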

Serendipity refers to the element of surprise or unexpectedness in the recommendations. It aims to suggest items that go beyond a user’s explicit preferences or direct expectations. Serendipitous recommendations should surprise users by presenting them with items they did not anticipate but still find enjoyable or valuable. Serendipity has been examined regarding its potential to reduce popularity bias and boost the utility of recommendations by facilitating better discoverability. However, designing for serendipity is challenging, as it requires balancing surprise and relevance. One line of research that combines the necessity to provide users with surprising yet relevant items is presented by Björneborn (2017) and has seen operationalization in recent attempts to design recommender systems beyond the algorithm (Smets et al., 2022). Björneborn (2017) identifies three key affordances for serendipity: diversifiability, traversability, and sensoriability. These affordances are linked to three personal serendipity factors: curiosity, mobility, and sensitivity. Diversifiability relates to curiosity and includes factors such as interest, playfulness, and inclusiveness. Traversability is associated with mobility and encompasses searching, immersion, and exploration. Sensoriability is linked to sensitivity and involves stumbling upon, attention, surprise, and experiential aspects. These affordances cover the essential components of human interactions with information environments. A quintessential insight that can be derived from this operationalization is that environments can be designed in ways that cultivate serendipitous encounters, whereas serendipity itself cannot be designed for (Smets, 2023; Björneborn, 2017).

5 Fairness in Recommender Systems
As we have seen, beyond-accuracy measures attempt to introduce aspects other than accurately re-predicting historic interactions into the evaluation of recommender systems. Still, these measures focus only on the items that are recommended. However, the bigger context of who is affected in which way by the recommendations given by a system, and whether the results or the mechanisms underlying them are “fair,” has become an increasingly important factor in evaluating recommender systems (Ekstrand et al., 2022). In recent years, fairness has gained significant attention in discussions about machine learning systems. Fairness in classification and scoring or ranking tasks has been extensively studied (Chouldechova & Roth, 2020). Here, concepts like individual fairness and group fairness are typically investigated. Individual fairness aims to treat similar individuals similarly, ensuring comparable decisions for those with similar abilities. Group fairness examines how the system behaves with respect to group membership or identities, addressing discriminatory behaviors and outcomes. Ekstrand et al. (2022, p. 682) list the following fundamental concepts in terms of fairness definitions, harms, and motivations for fairness.

Definitions

Individual fairness: Similar individuals have similar experiences.

Group fairness: Different groups have similar experiences.

Sensitive attribute: Attribute identifying group membership.

Disparate treatment: Groups explicitly treated differently.

Disparate impact: Groups receive outcomes at different rates.

Disparate mistreatment: Groups receive erroneous (adverse) effects at different rates.

Distributional harm: Harm caused by (unfair) distribution of resources or outcomes.

Representational harm: Harm caused by inaccurate internal or external representation.

Motivations

Anti-classification: Protected attributes should not play a role in decisions.

Anti-subordination: Decision process should actively work to undo past harm.

With regard to the motivations for fairness, the aspect most commonly discussed and addressed in technology-oriented works is that of anti-classification, i.e., preventing harm before it occurs. The concept of anti-subordination, i.e., addressing past harm and therefore introducing current “unfairness” in order to support historically disadvantaged users (cf. “affirmative action”), is a more complex and difficult topic and often remains unaddressed. For digital humanism, this presents an opportunity to engage in a multidisciplinary discourse on the design of future recommender systems.

Although the objective of a fairness-focused system is commonly labeled as “fairness,” it is crucial to recognize that achieving universal fairness is unattainable. Fairness is a multifaceted issue that is subject to social debates and disagreements, making it impossible to fully resolve. Competing notions of fairness, the diverse requirements of multiple stakeholders, and the inherently subjective and debatable nature of fairness all contribute to this (Ekstrand et al., 2022).

Emerging approaches aim to address fairness-related issues in recommender systems (Boratto & Marras, 2021). As sketched above, recommender systems involve various stakeholders with different fairness concerns (Ekstrand et al., 2022):

Consumer fairness involves treating users fairly and ensuring no systematic disadvantages.

Provider fairness focuses on treating content creators fairly, giving them equal opportunity for their work to be recommended.

Subject fairness is the fair treatment of the people or entities that the recommended items are about.

While fairness concerns for these stakeholders are typically considered separately, some work aims to analyze or provide fairness for multiple stakeholders simultaneously. To promote fairness in recommender systems, it is crucial to identify and address specific harms, understand the stakeholders involved, and contribute to building systems that promote equity and avoid discrimination (Ekstrand et al., 2022 ). Ideally, responsibility for these tasks is taken by the platforms providing the recommendation services. An overview of different works approaching fairness metrics in ranking and recommendation tasks is given by Patro et al. ( 2022 ).
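One way provider fairness is often operationalized is by comparing the exposure different provider groups receive across recommendation lists. The sketch below is a hedged illustration: the group labels, the 1/log2(rank+1) position-weighting model, and the min/max parity ratio are all illustrative assumptions, not a standard prescribed by the works cited above:

```python
# Provider-exposure check: sum position-weighted exposure per provider
# group over many recommendation lists; a low min/max ratio between
# groups flags a potential provider-fairness problem.
from math import log2

provider_group = {"A": "g1", "B": "g1", "C": "g2", "D": "g2"}

def exposure_by_group(rec_lists):
    exp = {}
    for recs in rec_lists:
        for rank, item in enumerate(recs, start=1):
            g = provider_group[item]
            exp[g] = exp.get(g, 0.0) + 1 / log2(rank + 1)  # higher ranks weigh more
    return exp

lists = [["A", "C"], ["B", "A"], ["A", "D"]]
exp = exposure_by_group(lists)
ratio = min(exp.values()) / max(exp.values())
print(exp, ratio)  # a ratio far below 1 indicates unequal group exposure
```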

Other avenues for designing for fairness could be found in a better understanding of users, their values, and their motivations. Hence, future studies could delve into psychological theories and empirical studies to understand individuals’ preferences and their association with contextual information, personality, and demographic characteristics. Recommender systems are designed to assist human decision-making. Additionally, group recommender systems leverage social psychology constructs to provide recommendations beneficial for groups. While current recommender systems provide useful recommendations, they often lack interpretability and fail to incorporate the underlying cognitive reasons for user behavior (Wilson et al., 2020). This is discussed in the next section on relevant and promising research directions.

6 Human- and Value-Centered Recommender Systems

6.1 Psychology-Informed Recommender Systems

A survey on psychology-informed recommender systems by Lex et al. (2021) identifies three categories in which different streams of psychological research are being integrated: cognition-inspired, personality-aware, and affect-aware recommender systems. Cognition-inspired recommender systems employ models from cognitive psychology to enhance the design and functionality of recommender systems. Personality-aware recommender systems consider individual personality traits to alleviate cold-start situations for new users and improve personalization by increasing recommendation list diversity. For instance, the widely used Five-Factor Model (FFM), also known as the Big Five model or the OCEAN model, is often applied in recommender systems research to describe human personality traits (McCrae & John, 1992). Neidhardt et al. (2015) introduced a picture-based approach to elicit user preferences in tourism. Tourism products are complex, and users often have difficulty expressing their needs, especially in the early stages of the travel decision process. The approach introduces seven factors that combine the FFM and tourist roles from the literature and creates a mapping between the factors and pictures. In this way, pictures can be used to implicitly and nonverbally elicit the preferences of users and allow them to interact with the RS in a more enjoyable way. Additionally, affect-aware recommender systems consider the emotional state and affective responses of users to provide more tailored recommendations.

With these approaches aiming to better describe the user, one needs to remain aware that these ultimately highly indirect methods of deriving human traits and emotions are often built upon strongly debated theories in psychology (see also below) and that their validity is very limited due to technological shortcomings, assumptions, and neglect (Agüera y Arcas et al., 2017). Whether these research directions therefore actually constitute progress in building more “human-centered” systems, or are yet another unsuitable attempt that effectively “dehumanizes” users and violates their privacy, needs to be painstakingly observed and investigated. As such, from a digital humanist’s perspective, this research direction in recommender systems needs to be met with caution.

6.2 Value-Oriented Recommender Systems

Lately, researchers have been attempting to create more moral and human-centered recommender systems that are in line with human values and support human welfare. In order to create recommender systems that reflect human values and advance well-being, Stray et al. ( 2022 ) advocate incorporating human values into them. They stress the importance of taking an interdisciplinary approach to this task. The psychological mechanisms that drive changes in user behavior, including their needs and individual abilities to cope with uncertainty and ambiguous situations, are frequently overlooked (FeldmanHall & Shenhav, 2019 ).

However, it is important to acknowledge that the field of recommender systems research tends to overly rely on easily accessible and quantifiable data, often neglecting discussions on the stability of observable attitudes and behaviors over time (“McNamara Fallacy”; Jannach & Bauer, 2020 ) and the potential for interventions to bring about change. Many of the prevailing psychological theories and concepts in the quickly developing field of recommender systems are based on early psychological research (such as Ekman’s theory of basic emotion, 1992 ), which has since frequently been shown to be oversimplified and unable to adequately capture the complex and dynamic nature of human attitudes, behaviors, cognition, and emotion (Barrett, 2022 ). To illustrate, the stability of a person’s personality across different situations has been challenged, as individuals do not consistently behave in accordance with their inner urges (Montag & Elhai, 2019 ). Montag and Elhai also highlight that while longitudinal studies have demonstrated the overall stability of personality over time, subtle changes can occur, and life events impact personality development. This knowledge emphasizes the importance of considering the context in psychology-aware recommender systems. Integrating these insights into recommender systems could provide a more nuanced understanding of users’ preferences and behaviors.

6.3 Embodiment in Recommender Systems

Additionally, recent advances in cognitive science shed light on the intricate relationship between decision-making processes and brain-body functions, which holds significance for the design and functionality of recommender systems. Psychologist and cognitive scientist Lisa Feldman Barrett emphasizes the brain's role in maintaining the body's vital resources, referred to as allostasis, to facilitate various cognitive and physical activities (Barrett, 2017). In light of these insights, incorporating an understanding of brain-body functions becomes crucial in the design of recommender systems: acknowledging the interplay between cognitive processes and physiological regulation allows for a more holistic approach to recommendation algorithms. It is equally essential to recognize the characteristics of human decision-making, its potential as well as its vulnerabilities (Turkle, 2022). To illustrate, while a serendipitous recommendation might fit the user's profile perfectly, their emotional state might simply not allow them to receive it as such (Nguyen et al., 2018). Furthermore, some users' personalities are more accepting of serendipitous recommendations than others.

In summary, recent discoveries in cognitive science, including the understanding of brain-body functions and decision-making processes, have direct implications for the design and improvement of recommender systems. Integrating insights from cognitive psychology and neuroscience can enhance the accuracy and relevance of recommendations.

6.4 Trust in Recommender Systems

The interaction between a user and a recommender system is also shaped by the amount of trust the user places in it. The more a user trusts the recommender system to generate useful items, the more readily the user will accept them (Harman et al., 2014). This is especially important when recommending serendipitous items, as these may appear unexpected, which can undermine trust (Afridi, 2019). Providing a user with relevant recommendations establishes trust over time, while providing unsatisfying recommendations erodes it. There are also other challenges; according to Ricci et al. (2022, p. 7), "some users do not trust recommender systems, thus they play with them to see how good they are at making recommendations," and "a certain system may also offer specific functions to let the users test its behavior in addition to those just required for obtaining recommendations."
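The trust dynamic described above, growing with useful recommendations and eroding with unsatisfying ones, can be illustrated with a deliberately simple toy model. The update rule and the smoothing factor `alpha` are assumptions made for this sketch, not a model taken from the cited works:

```python
# Toy illustration only: trust modeled as an exponentially smoothed
# average of recommendation outcomes. The update rule and alpha are
# assumptions for demonstration, not drawn from the cited literature.

def update_trust(trust: float, satisfied: bool, alpha: float = 0.2) -> float:
    """Nudge trust toward 1.0 after a useful recommendation, toward 0.0 otherwise."""
    target = 1.0 if satisfied else 0.0
    return trust + alpha * (target - trust)

trust = 0.5  # neutral starting point
for outcome in [True, True, True]:   # a run of useful recommendations
    trust = update_trust(trust, outcome)
print(round(trust, 3))               # 0.744: trust has grown above the baseline

trust = update_trust(trust, False)   # one unsatisfying recommendation
print(round(trust, 3))               # 0.595: trust drops, but not back to zero
```

Even this crude sketch reproduces the asymmetry discussed above: a single bad recommendation undoes several good ones only partially, but a sustained run of them would erode trust toward zero.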

Recent evidence highlighting the importance of autonomy support for human well-being and positive outcomes has raised concerns regarding autonomy within technology design (Calvo et al., 2020 ). However, incorporating design strategies that promote human autonomy faces two major challenges. Firstly, the breadth of designing for autonomy is extensive, as technologies now play a role in various aspects of our lives, such as education, workplace, health, and relationships, spanning different stages of human development. Secondly, these design practices present ethical dilemmas that challenge existing conceptions of autonomy across disciplines, particularly considering that most technologies are designed to influence human behaviors and decision-making processes.

6.5 Socially Responsible Designs

The inclusion of “socially responsible designs” (Heitz et al., 2022, p. 2) in research and development programs could open opportunities to create recommenders whose resulting actions and choices are advantageous to both individuals and society (e.g., Stray et al., 2021). By incorporating individual-level elements and user characteristics, psychology-aware recommender systems can provide a fresh viewpoint in recommender systems research. These systems aim to offer more individualized, varied, and interpretable recommendations by making use of psychological categories and concepts. To ensure a more thorough understanding of user preferences and behavior in the design of recommender systems, further exploration is required of trait-state perspectives, of developments in psychology and the cognitive sciences regarding human characteristics and intervention possibilities, and of the impact of social context.

7 Conclusions

We have provided but a glimpse into the area of recommender systems, their importance for the modern Web, and their potential impact on individuals and democracy. Following an overview of techniques used in recommender systems and strategies to evaluate and optimize them, we have focused on the ongoing research discussions dealing with the topics of diversity and fairness. From these discussions, the following take-away messages emerge:

Optimizing systems on historical patterns and behavioral data can suggest effectiveness and improvement while in fact decreasing user satisfaction and narrowing utility. Other aspects, such as diversity of results, are important for judging the quality of a system, even if they do not count as correct under the chosen accuracy-oriented evaluation measures.

When systems are deployed in areas relevant to democracy, such as news and media, or to the well-being and success of individuals, such as job recommendation, values defined by society should take precedence over the objectives of service providers, for instance, by means of policy and regulation. Operationalizing these values is challenging but imperative.

Recommendation settings are complex tasks involving multiple stakeholders. Questions of diversity and fairness must always be addressed from their diverse and mostly conflicting points of view. Again, whose interests are to be prioritized should ultimately be decided by society or the affected community. Interdisciplinary approaches, e.g., involving political scientists, are required to define concepts such as fairness. These are challenging and complex tasks that ultimately require approaches that model societal values; despite the growing body of work addressing these topics, they remain open issues.

Not every research direction dealing with human features is human centered. In fact, there is a chance that some are not even scientific, as they are often built on weak assumptions, spurious effects, and insufficient technology. Conclusions drawn from such systems are not only invalid but potentially harmful, as they can form the basis for decisions that affect individuals. As such, poorly designed and careless research poses the risk of building “de-humanizing” systems rather than providing the claimed “human centricity.”

For digital humanism, recommender systems are a central technology. They are information filters and automatic decision systems. They make content accessible and at the same time act as opaque gatekeepers. They serve humans as well as business interests. They can be shaped according to values—including those of Digital Humanism.
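The first take-away message above can be made concrete with a hypothetical sketch contrasting an accuracy-oriented measure (precision@k) with a beyond-accuracy measure (intra-list diversity). The item vectors and relevance judgments below are invented toy data, not from any real system:

```python
# Hypothetical sketch: an accuracy metric can rank a near-duplicate list as
# "best" while a diversity metric reveals its narrowness. Item vectors and
# the relevance set are invented toy data.
from itertools import combinations


def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items the user actually liked."""
    return len(set(recommended[:k]) & relevant) / k


def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))


def intra_list_diversity(recommended, vectors):
    """Average pairwise dissimilarity (1 - cosine similarity) within the list."""
    pairs = list(combinations(recommended, 2))
    return sum(1 - cosine(vectors[i], vectors[j]) for i, j in pairs) / len(pairs)


vectors = {"a": [1.0, 0.0], "b": [0.9, 0.1], "c": [1.0, 0.1], "d": [0.0, 1.0]}
relevant = {"a", "b", "c"}      # items the user is known to like

narrow = ["a", "b", "c"]        # three near-identical items
mixed = ["a", "b", "d"]         # trades one "hit" for a dissimilar item

print(precision_at_k(narrow, relevant, 3))   # 1.0: "perfect" by accuracy alone
print(precision_at_k(mixed, relevant, 3))    # lower accuracy
print(intra_list_diversity(narrow, vectors) <
      intra_list_diversity(mixed, vectors))  # True: the narrow list is far less diverse
```

Which trade-off between the two metrics is "right" cannot be read off the numbers; as argued above, that is ultimately a question of values and stakeholder interests.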

Discussion Questions for Students and Their Teachers

Select an area where recommender systems are used and identify stakeholders.

For each stakeholder, discuss how they would benefit from a concept of diversity if applied to the recommender system.

Which concept would that be?

How would this connect to a notion of fairness from their perspective?

Which values could they follow and how would that affect their goals and the definitions chosen?

Where do the interests of different stakeholders align?

Recommender systems are necessary to efficiently navigate the vast amounts of online data and content; at the same time they are a normative factor and can be used to exert control and power. Discuss the usefulness and threats imposed by recommender systems. Collect anecdotal evidence of success stories of recommenders, failures, and concerns and identify individually desired functions of improved, future recommenders and platforms.

For technical solutions, a model of the real world and the operationalization of functions and goals is necessary. Discuss how human and societal values could be modeled and operationalized to enable more fair systems.

Learning Resources for Students

For a deeper understanding of the inner workings and principles of recommender systems, it is strongly suggested to directly refer to the Recommender Systems Handbook (3rd edition), in particular the chapters on techniques, applications, and challenges; novelty and diversity; multistakeholder systems; and fairness in recommender systems:

Ricci, F., Rokach, L., & Shapira, B. (2022). Recommender systems: Techniques, applications, and challenges. In Recommender Systems Handbook, 3rd ed., 1–35. DOI: 10.1007/978-1-0716-2197-4_1.

Castells, P., Hurley, N., & Vargas, S. (2022). Novelty and diversity in recommender systems. In Recommender Systems Handbook, 3rd ed., 603–646. DOI: 10.1007/978-1-0716-2197-4_16.

Abdollahpouri, H. & Burke, R. (2022) Multistakeholder Recommender Systems. In Recommender Systems Handbook, 3rd ed., 647–677. DOI: 10.1007/978-1-0716-2197-4_17.

Ekstrand, M. D., Das, A., Burke, R., & Diaz, F. (2022). Fairness in recommender systems. In Recommender Systems Handbook, 3rd ed., 679–707. DOI: 10.1007/978-1-0716-2197-4_18.

Critical takes on current practices and methodology in recommender systems and machine learning research can be found in:

Jannach, D., & Bauer, C. (2020). Escaping the McNamara fallacy: Towards more impactful recommender systems research. AI Magazine, 41(4):79–95.

Agüera y Arcas, B., Mitchell, M., & Todorov, A. (2017). Physiognomy’s New Clothes. Medium. URL: https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a

For a broader, multi-perspective discussion on the topics of diversity, fairness, and value-based recommendation, the following articles will provide additional input:

Helberger, N., Karppinen, K., & D’Acunto, L. (2018). Exposure diversity as a design principle for recommender systems. Information, Communication & Society, 21(2):191–207. DOI: 10.1080/1369118X.2016.1271900.

Binns, R. (2018). Fairness in Machine Learning: Lessons from Political Philosophy. Conference on Fairness, Accountability, and Transparency. Proceedings of Machine Learning Research, 81:1–11.

Stray, J., Halevy, A., Assar, P., Hadfield-Menell, D., Boutilier, C., Ashar, A., Beattie, L., Ekstrand, M., Leibowicz, C., Sehat, C. M., Johansen, S., Kerlin, L., Vickrey, D., Singh, S., Vrijenhoek, S., Zhang, A., Andrus, M., Helberger, N., Proutskova, P., Mitra, T., & Vasan, N. (2022). Building Human Values into Recommender Systems: An Interdisciplinary Synthesis. arXiv preprint arXiv:2207.10192.

Abdollahpouri, H., & Burke, R. (2022). Multistakeholder recommender systems. In Recommender systems handbook (3rd ed., pp. 647–677). Springer. https://doi.org/10.1007/978-1-0716-2197-4_17


Afridi, A. H. (2019). Transparency for beyond-accuracy experiences: A novel user interface for recommender systems. Procedia Computer Science, 151 , 335–344. https://doi.org/10.1016/j.procs.2019.04.047


Agüera y Arcas, B., Mitchell, M., & Todorov, A. (2017). Physiognomy’s new clothes. Medium . https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a

Aridor, G., Goncalves, D., & Sikdar, S. (2020). Deconstructing the filter bubble: User decision-making and recommender systems. In Proceedings of the 14th ACM conference on recommender systems (RecSys ’20) (pp. 82–91). ACM. https://doi.org/10.1145/3383313.3412246 .

Barrett, L. F. (2017). The theory of constructed emotion: An active inference account of interoception and categorization. Social Cognitive and Affective Neuroscience, 12 (1), 1–23. https://doi.org/10.1093/scan/nsw154

Barrett, L. F. (2022). Context reconsidered: Complex signal ensembles, relational meaning, and population thinking in psychological science. American Psychologist, 77 (8), 894–920. https://doi.org/10.1037/amp0001054

Binns, R. (2018). Fairness in machine learning: Lessons from political philosophy. Conference on Fairness, Accountability, and Transparency. Proc Mach Learn Res, 81 , 1–11. https://doi.org/10.48550/arXiv.1712.03586

Björneborn, L. (2017). Three key affordances for serendipity: Toward a framework connecting environmental and personal factors in serendipitous encounters. Journal of Documentation, 73 (5), 1053–1081. https://doi.org/10.1108/JD-07-2016-0097

Boratto, L., & Marras, M. (2021). Advances in bias-aware recommendation on the web. In Proceedings of the 14th ACM international conference on web search and data mining (pp. 1147–1149). https://doi.org/10.1145/3437963.3441665

Burke, R. (2007). Hybrid web recommender systems. The adaptive web: Methods and strategies of web personalization (pp. 377–408). https://doi.org/10.1007/978-3-540-72079-9_12

Calvo, R. A., Peters, D., Vold, K., & Ryan, R. M. (2020). Supporting human autonomy in AI systems: A framework for ethical enquiry. Ethics of digital well-being: A multidisciplinary approach (pp. 31–54). https://doi.org/10.1007/978-3-030-50585-1_2

Castells, P., Hurley, N., & Vargas, S. (2022). Novelty and diversity in recommender systems. In Recommender systems handbook (3rd ed., pp. 603–646). https://doi.org/10.1007/978-1-0716-2197-4_16 .

Chouldechova, A., & Roth, A. (2020). A snapshot of the frontiers of fairness in machine learning. Communications of the ACM, 63 (5), 82–89. https://doi.org/10.1145/3376898

Dahlgren, P. (2021). A critical review of filter bubbles and a comparison with selective exposure. Nordicom Review, 42 , 15–33. https://doi.org/10.2478/nor-2021-0002

Ekman, P. (1992). An argument for basic emotions. Cognition and Emotion, 6 (3–4), 169–200. https://doi.org/10.1080/02699939208411068

Ekstrand, M. D., Das, A., Burke, R., & Diaz, F. (2022). Fairness in recommender systems. In Recommender systems handbook (3rd ed., pp. 679–707). https://doi.org/10.1007/978-1-0716-2197-4_18 .

FeldmanHall, O., & Shenhav, A. (2019). Resolving uncertainty in a social world. Nature Human Behaviour, 3 (5), 426–435. https://doi.org/10.1038/s41562-019-0590-x

Harman, J. L., O’Donovan, J., Abdelzaher, T., & Gonzalez, C. (2014). Dynamics of human trust in recommender systems (pp. 305–308). In Proceedings of the 8th ACM Conference on Recommender systems (RecSys '14). https://doi.org/10.1145/2645710.2645761


Heitz, L., Lischka, J. A., Birrer, A., Paudel, B., Tolmeijer, S., Laugwitz, L., & Bernstein, A. (2022). Benefits of diverse news recommendations for democracy: A user study. Digital Journalism, 10 (10), 1710–1730. https://doi.org/10.1080/21670811.2021.2021804

Helberger, N., Karppinen, K., & D’Acunto, L. (2018). Exposure diversity as a design principle for recommender systems. Information, Communication and Society, 21 (2), 191–207. https://doi.org/10.1080/1369118X.2016.1271900

Helberger, N. (2019). On the democratic role of news recommenders. Digital Journalism, 7 (8), 993–1012. https://doi.org/10.1080/21670811.2019.1623700

Jannach, D., & Bauer, C. (2020). Escaping the McNamara fallacy: Towards more impactful recommender systems research. AI Magazine, 41 (4), 79–95. https://doi.org/10.1609/aimag.v41i4.5312

Jannach, D., Zanker, M., Felfernig, A., & Friedrich, G. (2010). Recommender systems: An introduction . Cambridge University Press.

Jannach, D., Resnick, P., Tuzhilin, A., & Zanker, M. (2016). Recommender systems—Beyond matrix completion. Communications of the ACM, 59 (11), 94–102. https://doi.org/10.1145/2891406

Kaminskas, M., & Bridge, D. (2016). Diversity, serendipity, novelty, and coverage: A survey and empirical analysis of beyond-accuracy objectives in recommender systems. ACM Transactions on Interactive Intelligent Systems, 7 (1), 1–42. https://doi.org/10.1145/2926720

Keijzer, M. A., & Mäs, M. (2022). The complex link between filter bubbles and opinion polarization. Data Science, 5 (2), 139–166. https://doi.org/10.3233/DS-220054

Lee, S. Y., & Lee, S. W. (2022). Normative or effective? The role of news diversity and trust in news recommendation services. International Journal of Human–Computer Interaction, 39 (6), 1216–1229. https://doi.org/10.1080/10447318.2022.2057116

Lex, E., Kowald, D., Seitlinger, P., Tran, T. N. T., Felfernig, A., & Schedl, M. (2021). Psychology-informed recommender systems. Foundations and Trends®. Information Retrieval, 15 (2), 134–242. https://doi.org/10.1561/1500000090

Liu, P., Zhang, L., & Gulla, J. A. (2023). Pre-train, prompt and recommendation: A comprehensive survey of language modelling paradigm adaptations in recommender systems . arXiv preprint https://doi.org/10.48550/arXiv.2302.03735 .

McCrae, R. R., & John, O. P. (1992). An introduction to the five-factor model and its applications. Journal of Personality, 60 (2), 175–215. https://doi.org/10.1111/j.1467-6494.1992.tb00970.x

Michiels, L., Leysen, J., Smets, A., & Goethals, B. (2022). What are filter bubbles really? A review of the conceptual and empirical work. In Adjunct proceedings of the 30th ACM conference on user modeling, adaptation and personalization (pp. 274–279). https://doi.org/10.1145/3511047.3538028 .

Möller, J., Trilling, D., Helberger, N., & van Es, B. (2018). Do not blame it on the algorithm: an empirical assessment of multiple recommender systems and their impact on content diversity. Information, Communication and Society, 21 (7), 959–977. https://doi.org/10.1080/1369118X.2018.1444076

Montag, C., & Elhai, J. D. (2019). A new agenda for personality psychology in the digital age? Personality and Individual Differences, 147 , 128–134. https://doi.org/10.1016/j.paid.2019.03.045

Neidhardt, J., Seyfang, L., Schuster, R., & Werthner, H. (2015). A picture-based approach to recommender systems. Information Technology and Tourism, 15 , 49–69. https://doi.org/10.1007/s40558-014-0017-5

Nguyen, T. T., Hui, P.-M., Harper, F. M., Terveen, L., & Konstan, J. A. (2014). Exploring the filter bubble: The effect of using recommender systems on content diversity. In Proceedings of the 23rd international conference on World wide web (pp. 677–686). https://doi.org/10.1145/2566486.2568012 .

Nguyen, T. T., Maxwell Harper, F., Terveen, L., & Konstan, J. A. (2018). User personality and user satisfaction with recommender systems. Information Systems Frontiers, 20 , 1173–1189. https://doi.org/10.1007/s10796-017-9782-y

Pariser, E. (2011). The filter bubble: What the internet is hiding from you . Penguin Press.


Patro, G. K., Porcaro, L., Mitchell, L., Zhang, Q., Zehlike, M., & Garg, N. (2022). Fair ranking: A critical review, challenges, and future directions. In Proceedings of the 2022 ACM conference on fairness, accountability, and transparency (pp. 1929–1942). https://doi.org/10.1145/3531146.3533238 .

Raza, S., & Ding, C. (2020). A regularized model to trade-off between accuracy and diversity in a news recommender system. In 2020 IEEE international conference on big data (pp. 551–560). https://doi.org/10.1109/BigData50022.2020.9378340 .

Ribeiro, M. T., Ziviani, N., Moura, E. S. D., Hata, I., Lacerda, A., & Veloso, A. (2015). Multiobjective pareto-efficient approaches for recommender systems. ACM Transactions on Intelligent Systems and Technology, 53 , 1–20. https://doi.org/10.1145/2629350

Ricci, F., Rokach, L., & Shapira, B. (2022). Recommender systems: Techniques, applications, and challenges. In Recommender systems handbook (3rd ed, pp. 1–35). https://doi.org/10.1007/978-1-0716-2197-4_1 .

Smets, A. (2023). Designing for serendipity: A means or an end? Journal of Documentation, 79 (3), 589–607. https://doi.org/10.1108/JD-12-2021-0234

Smets, A., Michiels, L., Bogers, T., & Björneborn, L. (2022). Serendipity in recommender systems beyond the algorithm: A feature repository and experimental design. In Proceedings of the 9th joint workshop on interfaces and human decision making for recommender systems co-located with 16th ACM conference on recommender systems (pp. 44–66). https://ceur-ws.org/Vol-3222/paper4.pdf

Stray, J., Vendrov, I., Nixon, J., Adler, S., & Hadfield-Menell, D. (2021). What are you optimizing for? Aligning recommender systems with human values. CoRR, abs/2107.10939. https://doi.org/10.48550/arXiv.2107.10939

Stray, J., Halevy, A., Assar, P., Hadfield-Menell, D., Boutilier, C., Ashar, A., Beattie, L., Ekstrand, M., Leibowicz, C., Sehat, C. M., Johansen, S., Kerlin, L., Vickrey, D., Singh, S., Vrijenhoek, S., Zhang, A., Andrus, M., Helberger, N., Proutskova, P., Mitra, T., & Vasan, N. (2022). Building human values into recommender systems: An interdisciplinary synthesis . arXiv preprint https://doi.org/10.48550/arXiv.2207.10192 .

Turkle, S. (2022). The empathy diaries: A memoir . Penguin.

Vermeulen, J. (2022). To nudge or not to nudge: News recommendation as a tool to achieve online media pluralism. Digital Journalism, 10 , 1–20. https://doi.org/10.1080/21670811.2022.2026796

Werthner, H., et al. (2019). The Vienna manifesto on digital humanism . https://dighum.org/dighum-manifesto/

Werthner, H., Stanger, A., Schiaffonati, V., Knees, P., Hardman, L., & Ghezzi, C. (2023). Digital humanism: The time is now. Computer, 56 (1), 138–142. https://doi.org/10.1109/MC.2022.3219528

Wilson, J. R., Gilpin, L., & Rabkina, I. (2020). A knowledge driven approach to adaptive assistance using preference reasoning and explanation . arXiv preprint https://doi.org/10.48550/arXiv.2012.02904 .

Zhang, S., Yao, L., Sun, A., & Tay, Y. (2019). Deep learning based recommender system: A survey and new perspectives. ACM Computing Surveys, 52 (1), 1–38. https://doi.org/10.1145/3285029


Acknowledgments

This work was supported by the Christian Doppler Research Association (CDG). This research was funded in whole, or in part, by the Austrian Science Fund (FWF) [P33526]. For the purpose of open access, the authors have applied a CC BY public copyright license to any author accepted manuscript version arising from this submission.

Author information

Authors and Affiliations

Faculty of Informatics, TU Wien, Vienna, Austria

Peter Knees

Christian Doppler Lab for Recommender Systems, TU Wien, Vienna, Austria

Julia Neidhardt & Irina Nalis


Corresponding author

Correspondence to Peter Knees .

Editor information

Editors and Affiliations

TU Wien, Vienna, Austria

Hannes Werthner

DEIB, Politecnico di Milano, Milano, Italy

Carlo Ghezzi

Department of Computing, Imperial College London, London, UK

Jeff Kramer

Ludwig-Maximilians-Universität München, München, Germany

Julian Nida-Rümelin

Lero & The Open University, Milton Keynes, UK

Bashar Nuseibeh

University of Vienna, Vienna, Austria

Middlebury College and Santa Fe Institute, Middlebury, VT, USA

Allison Stanger

Rights and permissions

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.


Copyright information

© 2024 The Author(s)

About this chapter

Knees, P., Neidhardt, J., Nalis, I. (2024). Recommender Systems: Techniques, Effects, and Measures Toward Pluralism and Fairness. In: Werthner, H., et al. Introduction to Digital Humanism. Springer, Cham. https://doi.org/10.1007/978-3-031-45304-5_27


DOI : https://doi.org/10.1007/978-3-031-45304-5_27

Published : 21 December 2023

Publisher Name : Springer, Cham

Print ISBN : 978-3-031-45303-8

Online ISBN : 978-3-031-45304-5


