{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:35:40.092108Z"
},
"title": "No NLP Task Should be an Island: Multi-disciplinarity for Diversity in News Recommender Systems",
"authors": [
{
"first": "Myrthe",
"middle": [],
"last": "Reuver",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Leiden University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Antske",
"middle": [],
"last": "Fokkens",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Leiden University",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Suzan",
"middle": [],
"last": "Verberne",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Leiden University",
"location": {}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Natural Language Processing (NLP) is defined by specific, separate tasks, with each their own literature, benchmark datasets, and definitions. In this position paper, we argue that for a complex problem such as the threat to democracy by non-diverse news recommender systems, it is important to take into account a higher-order, normative goal and its implications. Experts in ethics, political science and media studies have suggested that news recommendation systems could be used to support a deliberative democracy. We reflect on the role of NLP in recommendation systems with this specific goal in mind and show that this theory of democracy helps to identify which NLP tasks and techniques can support this goal, and what work still needs to be done. This leads to recommendations for NLP researchers working on this specific problem as well as researchers working on other complex multidisciplinary problems.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Natural Language Processing (NLP) is defined by specific, separate tasks, with each their own literature, benchmark datasets, and definitions. In this position paper, we argue that for a complex problem such as the threat to democracy by non-diverse news recommender systems, it is important to take into account a higher-order, normative goal and its implications. Experts in ethics, political science and media studies have suggested that news recommendation systems could be used to support a deliberative democracy. We reflect on the role of NLP in recommendation systems with this specific goal in mind and show that this theory of democracy helps to identify which NLP tasks and techniques can support this goal, and what work still needs to be done. This leads to recommendations for NLP researchers working on this specific problem as well as researchers working on other complex multidisciplinary problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The field of Natural Language Processing (NLP) uses specific, self-defined definitions for separate tasks -each with their own leaderboards, benchmark datasets, and performance metrics. When dealing with complex, societal problems, it may however be better to take into account a broader view, starting from the actual needs to solve the overall societal problem. In particular, this paper addresses the complex issue of non-diverse news recommenders potentially threatening democracy (Helberger, 2019) . We focus on a theory of democracy and its role in news recommendation, as described in Helberger (2019) , and reflect on which NLP tasks may help address this issue. In doing so, we consider work by experts on the problem and domain, such as political scientists, recommender system experts, philosophers and media and communication experts.",
"cite_spans": [
{
"start": 485,
"end": 502,
"text": "(Helberger, 2019)",
"ref_id": "BIBREF19"
},
{
"start": 592,
"end": 608,
"text": "Helberger (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "News recommender systems play an increasingly important role in online news consumption (Karimi et al., 2018) . Such systems recommend several news articles from a large pool of possible articles whenever the user wishes to read news. Recommender systems usually attempt to make the recommended articles increase the user's interaction and engagement. In a news recommender system, this typically means optimizing for the individual user's \"clicks\" or \"reading time\" (Zhou et al., 2010) . These measures are considered a proxy for reader interest and engagement, but other metrics could also be used, including the time spent on a page or article ratings.",
"cite_spans": [
{
"start": 88,
"end": 109,
"text": "(Karimi et al., 2018)",
"ref_id": "BIBREF23"
},
{
"start": 467,
"end": 486,
"text": "(Zhou et al., 2010)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recommender systems are tailored to individual user interests. For other types of recommender systems, e.g. entertainment systems (recommending music or movies), this is less of a problem. However, news recommendation is connected to society and democracy, because news plays an important role in keeping citizens informed on recent societal issues and debates (Helberger, 2019) . Personalization to user interest in the news recommendation domain can lead to a situation where users are increasingly unaware of different ideas or perspectives on current issues. The dangers of such news 'filter bubbles' (Pariser, 2011) and online 'echo chambers' (Jamieson and Cappella, 2008) due to online (over)personalization have been pointed out before (Bozdag, 2013; Sunstein, 2018) .",
"cite_spans": [
{
"start": 361,
"end": 378,
"text": "(Helberger, 2019)",
"ref_id": "BIBREF19"
},
{
"start": 605,
"end": 620,
"text": "(Pariser, 2011)",
"ref_id": "BIBREF30"
},
{
"start": 648,
"end": 677,
"text": "(Jamieson and Cappella, 2008)",
"ref_id": "BIBREF21"
},
{
"start": 743,
"end": 757,
"text": "(Bozdag, 2013;",
"ref_id": "BIBREF6"
},
{
"start": 758,
"end": 773,
"text": "Sunstein, 2018)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Political theory provides several models of democracy, which each also imply different roles for news recommendation. We follow the deliberative model of democracy, which states citizens of a functioning democracy need to get access to different ideas and viewpoints, and engage with these and with each other (Manin, 1987; Helberger, 2019 ) (a further explanation of this model is given in Section 2). A uniform news diet and personalization to only personal interests can, in theory if not in practice, lead to a narrow view on current issues and a lack of deliberation in democracy. When considering this model, it becomes clear that news personalization on user interest alone is potentially harmful for democracy. The normative goal of a recommender system then becomes: supporting a deliberative democracy by showing a diverse set of views to users. NLP can play a role here, by automatically identifying viewpoints, arguments, or claims in news texts. Output of such trained models can help recommend articles that show a diverse set of views and arguments, and thus support a deliberative democracy.",
"cite_spans": [
{
"start": 310,
"end": 323,
"text": "(Manin, 1987;",
"ref_id": "BIBREF25"
},
{
"start": 324,
"end": 339,
"text": "Helberger, 2019",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The explicit goals and underlying values of democracy expressed in the model of deliberative democracy can help in defining what NLP tasks and analyses are relevant for tackling the potential harmful effects of news recommendation. This can increase the societal impact of relevant NLP tasks. We believe considering such theories and normative models can also help work on other complex concepts and societal problems where NLP plays a role. In this paper, we outline societal challenges and a theoretical model of the role of non-diverse news recommenders in democracy, as developed by experts such as political scientists and media experts. We then argue that argument mining, viewpoint detection, and related NLP tasks can make a valuable contribution to the effort in diversifying news recommendation and thereby supporting a deliberative democracy. This position paper provides the following contributions to the discussion: We argue that taking normative and/or societal goals into account can provide insights in the usefulness of specific NLP tasks for complex societal problems. As such, we believe that approaching such problems from an interdisciplinary point of view can help define NLP tasks better and/or increase their impact. In particular, we outline the normative and societal goals for diversifying news recommendation systems and illustrate how these goals relate to various NLP tasks. This results in a discussion on how, on the one hand, news recommendation can make better use of NLP and, on the other hand, how the goal of diversifying news provides inspiration for improving existing tasks or developing new ones.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper is structured as follows: We first describe the problem that personalized news recommendation could pose for democracy, as well as the importance of an interdisciplinary approach to solving this problem in Section 2. Section 3 provides an overview of literature tackling diversity in news recommendation as a solution to this problem, and points out remaining gaps in these efforts, specifically connected to the idea of a deliberative democracy. Section 4 outlines several related NLP tasks and their connection to this overarching normative goal. In Section 5, we discuss what we think the NLP community should take away from this reflection, and in Section 6 we will conclude our paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The online news domain has increasingly moved towards personalization (Karimi et al., 2018) . In the news domain, such personalization comes with specific issues and challenges. A combination of personalizaton and (political) news can lead to polarization, Filter Bubbles (Pariser, 2011) , and Echo Chambers (Jamieson and Cappella, 2008) . This trend to personalize leads to shared internet spaces becoming much more tailored to the individual user rather than being a shared, public space (Papacharissi, 2002) . Such phenomena could negatively impact a citizen's rights to information and right to not be discriminated (Eskens et al., 2017; Wachter, 2020) . Evidence for filter bubbles is under discussion (Borgesius et al., 2016; Bruns, 2019) , but empirical work does indicate that especially fringe groups holding extreme political or ideological opinions may end up into such a conceptual bubble (Boutyline and Willer, 2017) . Helberger (2019) points out that a lack of diversity in news recommendation can also harm democracy. This clearly holds for the deliberative model of democracy. This model assumes that democracy functions on deliberation, and the exchange of points of view. A fundamental assumption in this model is that individuals need access to diverse and conflicting viewpoints and argumentation to participate in these discussions (Manin, 1987) . News recommendations supporting a deliberative democracy should then play a role in providing access to these different viewpoints, ideas, and issues in the news (Helberger, 2019) .",
"cite_spans": [
{
"start": 70,
"end": 91,
"text": "(Karimi et al., 2018)",
"ref_id": "BIBREF23"
},
{
"start": 272,
"end": 287,
"text": "(Pariser, 2011)",
"ref_id": "BIBREF30"
},
{
"start": 308,
"end": 337,
"text": "(Jamieson and Cappella, 2008)",
"ref_id": "BIBREF21"
},
{
"start": 490,
"end": 510,
"text": "(Papacharissi, 2002)",
"ref_id": "BIBREF29"
},
{
"start": 620,
"end": 641,
"text": "(Eskens et al., 2017;",
"ref_id": "BIBREF14"
},
{
"start": 642,
"end": 656,
"text": "Wachter, 2020)",
"ref_id": "BIBREF39"
},
{
"start": 707,
"end": 731,
"text": "(Borgesius et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 732,
"end": 744,
"text": "Bruns, 2019)",
"ref_id": "BIBREF7"
},
{
"start": 901,
"end": 929,
"text": "(Boutyline and Willer, 2017)",
"ref_id": "BIBREF5"
},
{
"start": 932,
"end": 948,
"text": "Helberger (2019)",
"ref_id": "BIBREF19"
},
{
"start": 1353,
"end": 1366,
"text": "(Manin, 1987)",
"ref_id": "BIBREF25"
},
{
"start": 1531,
"end": 1548,
"text": "(Helberger, 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Personalization in the News, Theories of Democracy, and Interdisciplinarity",
"sec_num": "2"
},
{
"text": "The threat to democracy of non-diverse news recommenders is a complex problem. It requires input from different academic disciplines, from media studies and computer science to political science and philosophy (Bernstein et al., 2020) . Political theory can provide a framework that helps define what is needed from more empirical and technical researchers to address this problem. In the next section, we will discuss recent work in diversity in news recommendation. We point out remaining gaps in these efforts, specifically connected to the idea of a deliberative democracy.",
"cite_spans": [
{
"start": 210,
"end": 234,
"text": "(Bernstein et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Personalization in the News, Theories of Democracy, and Interdisciplinarity",
"sec_num": "2"
},
{
"text": "3 Diversity in News Recommendation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Personalization in the News, Theories of Democracy, and Interdisciplinarity",
"sec_num": "2"
},
{
"text": "Previous work on diversity in news recommender systems has mainly focused on assessing the current state of diversity in news recommendation (M\u00f6ller et al., 2018) , or on assessing diversity especially at the end of a computational pipeline, in the form of (evaluation) metrics (Vrijenhoek et al., 2021; Kaminskas and Bridge, 2016) , or on computational implementations of diversity (Lu et al., 2020) . Less attention has been given to defining and identifying the viewpoints, entities, or perspectives that are being diversified, or to the underlying values and goals of diversification.",
"cite_spans": [
{
"start": 141,
"end": 162,
"text": "(M\u00f6ller et al., 2018)",
"ref_id": "BIBREF27"
},
{
"start": 278,
"end": 303,
"text": "(Vrijenhoek et al., 2021;",
"ref_id": "BIBREF37"
},
{
"start": 304,
"end": 331,
"text": "Kaminskas and Bridge, 2016)",
"ref_id": "BIBREF22"
},
{
"start": 383,
"end": 400,
"text": "(Lu et al., 2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recent Diversity Efforts",
"sec_num": "3.1"
},
{
"text": "Within the recommender systems field, there are several ideas and concepts related to diversity, especially where it concerns evaluation or optimization metrics. Diversity, serendipity, and unexpectedness all are metrics used in the recommender systems literature that go beyond mere click accuracy (Kaminskas and Bridge, 2016). There are two gaps we see in many of these earlier metrics. Firstly, these metrics rarely focus on linguistic or conceptual features or representations of (aspects of) diversity in the news articles. Or, when they do, the NLP approaches are simplified (e.g. topic models in Draws et al. (2020b)) to centralize the recommendation algorithm and its optimization. Secondly, such \"beyond user interest\" optimization in recommender systems is usually not connected to normative goals and societal gains, but still geared towards user interest and the idea that users react positively to unexpected or previously unseen items. However, several fairly recent works (Lu et al., 2020; Vrijenhoek et al., 2021 ) have attempted to go beyond \"click accuracy\" for user interest and tackle the diversity in news recommendation problem while also explicitly considering normative values.",
"cite_spans": [
{
"start": 987,
"end": 1004,
"text": "(Lu et al., 2020;",
"ref_id": "BIBREF24"
},
{
"start": 1005,
"end": 1028,
"text": "Vrijenhoek et al., 2021",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recent Diversity Efforts",
"sec_num": "3.1"
},
{
"text": "Lu et al. (2020) discuss how to implement \"editorial values\" in a news recommender for a Dutch online newspaper. Editorial values were defined as journalistic missions or ideals found important by the newspaper's editors and journalists. One of these values is diversity, but their case-study concerns implementing and optimizing for \"dynamism\" -a diversity-related metric the authors define as \"how much a list changes between updates\". The authors note the computational difficulty of measuring and optimizing for diversity, and propose a proxy. They define \"intra-list diversity\" as the inverse of the similarity of a recommendation set. This similarity is calculated over pre-defined news categories of the articles, such as 'sports' and 'finance', as well as over different authors. Viewpoints or perspectives are not mentioned. Lu et al. (2020)'s \"editorial values\" seem to correspond to the public values mentioned in Bernstein et al. (2020) , and implicitly also relate to the democratic values described by Helberger (2019) . Both mention diversity as a central important aspect, but Lu et al. (2020) still centralize the user's satisfaction, rather than public values or democracy.",
"cite_spans": [
{
"start": 925,
"end": 948,
"text": "Bernstein et al. (2020)",
"ref_id": "BIBREF3"
},
{
"start": 1016,
"end": 1032,
"text": "Helberger (2019)",
"ref_id": "BIBREF19"
},
{
"start": 1093,
"end": 1109,
"text": "Lu et al. (2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Recent Diversity Efforts",
"sec_num": "3.1"
},
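As an illustration of the proxy described above, intra-list diversity can be computed as one minus the average pairwise similarity of the recommended articles. The sketch below is a minimal toy reading of that idea, not Lu et al. (2020)'s implementation; the metadata fields (category, author) and their equal weighting are our own simplifying assumptions.

```python
from itertools import combinations

def pairwise_similarity(a, b):
    """Similarity over pre-defined metadata: shared category and shared author,
    equally weighted (a simplifying assumption for this sketch)."""
    same_category = float(a["category"] == b["category"])
    same_author = float(a["author"] == b["author"])
    return (same_category + same_author) / 2.0

def intra_list_diversity(recommendations):
    """One minus the average pairwise similarity of a recommendation set."""
    pairs = list(combinations(recommendations, 2))
    if not pairs:
        return 0.0
    avg_sim = sum(pairwise_similarity(a, b) for a, b in pairs) / len(pairs)
    return 1.0 - avg_sim

recommendation_set = [
    {"category": "sports", "author": "A"},
    {"category": "finance", "author": "B"},
    {"category": "finance", "author": "C"},
]
print(intra_list_diversity(recommendation_set))  # higher = more diverse under this proxy
```

Note that a recommendation list can score high on such a proxy while still presenting a single viewpoint, which is exactly the gap discussed in the following paragraphs.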
{
"text": "Vrijenhoek et al. 2021connect several democratic models to computational evaluative metrics of news recommender diversity. The paper discusses several metrics that could be used as optimization and evaluation functions for diversity for news recommender systems supporting a deliberative democracy, such as one to measure and optimize for the \"representation\" of different societal opinions and voices, and another to measure the \"fragmentation\": whether different users receive different news story chains. These evaluation metrics are, to our knowledge, the first to explicitly consider normative values and models of democracy in news recommender system design. However, this work does not discuss how to represent or identify different voices in news articles. The NLP-related components discussed are limited to annotating different named entities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recent Diversity Efforts",
"sec_num": "3.1"
},
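To give a concrete flavour of such a metric, a toy version of "fragmentation" could compare the sets of news story chains recommended to different users, for example via the average pairwise Jaccard distance. This is only a sketch of the intuition under our own assumptions (story-ID sets, invented data); Vrijenhoek et al. (2021) define the metric in more detail.

```python
from itertools import combinations

def jaccard_distance(a, b):
    """One minus the Jaccard similarity of two sets of story identifiers."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)

def fragmentation(recommendations_per_user):
    """Average pairwise Jaccard distance between users' recommended story sets:
    0.0 means every user sees the same stories, 1.0 means no overlap at all."""
    pairs = list(combinations(recommendations_per_user.values(), 2))
    if not pairs:
        return 0.0
    return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)

recommendations_per_user = {
    "user_1": {"story_budget_debate", "story_vaccination", "story_elections"},
    "user_2": {"story_budget_debate", "story_sports_final"},
    "user_3": {"story_elections", "story_vaccination"},
}
print(fragmentation(recommendations_per_user))
```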
{
"text": "We argue that the inclusion of more fine-grained and state-of-the-art NLP methods allows more precise identification of different \"voices\" and viewpoints in support of diverse news recommender systems. The connection of these NLP tasks to diversifying news recommendation is as follows. We compare the building of diverse news recommenders in support of a deliberative democracy to building a tower, with the identification of the different voices or viewpoints as the base of that tower. When an approach can reliably and consistently identify different viewpoints or arguments, we can also diversify these viewpoints in recom-mendations. A solid definition of viewpoints and reliable methods to detect them thus form the foundation of our diverse news recommendation tower, and builds it towards the goal of a functioning deliberative democracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recent Diversity Efforts",
"sec_num": "3.1"
},
{
"text": "The news is a specific domain for recommender systems, with much faster-changing content than for instance movie or e-commerce recommendation. This leads to a number of unique technical challenges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Technical and Conceptual Challenges",
"sec_num": "3.2"
},
{
"text": "Two specific technical and conceptual challenges to a (diverse) news recommendation have been addressed in previous work. The first is the cold start problem (Zhou et al., 2010) , which occurs when a news recommender needs data on articles to decide whether to recommend the article to a (new) user. Recommendation, in news as well as in other domains, often uses the interaction data of similar users to recommend data to new users, such as in the method \"collaborative filtering\". Such data is missing on the large volumes of new articles added in the news domain every day, which makes such approaches less useful in this domain. This leads to other recommendation techniques being more common in the news recommendendation domain.",
"cite_spans": [
{
"start": 158,
"end": 177,
"text": "(Zhou et al., 2010)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Technical and Conceptual Challenges",
"sec_num": "3.2"
},
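To illustrate why content-based techniques remain usable when interaction data is missing, the sketch below ranks brand-new articles against a user's reading history using TF-IDF text similarity only. It is a minimal, generic example under our own assumptions (invented titles, a single history article), not a description of any particular production news recommender.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Articles published today: no clicks or reading-time data exist yet, so
# collaborative filtering has nothing to work with; the text itself does.
candidate_articles = [
    "Parliament debates new climate bill amid farmer protests",
    "Local football club wins national championship after penalties",
    "Central bank raises interest rates to curb inflation",
]
# The only signal we assume here: the text of articles this user read earlier.
user_history = ["Government announces new carbon tax to cut climate emissions"]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(candidate_articles + user_history)
candidates, user_profile = matrix[:-1], matrix[-1]

scores = cosine_similarity(candidates, user_profile).ravel()
for score, title in sorted(zip(scores, candidate_articles), reverse=True):
    print(f"{score:.2f}  {title}")
```

A purely interest-driven ranking like this is also what the diversity concerns in this paper are about: nothing in the score rewards exposing the user to a different viewpoint.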
{
"text": "The second challenge specific to our problem is the continuous addition of new and many different topics, issues, and entities in public discussion and in the news. This makes detecting viewpoints with one automated, single model and one set of training data difficult. Previous work often explores one well-known publicly debated topic, such as abortion (Draws et al., 2020a) or misinformation related to COVID-19 (Hossain et al., 2020) . However, in an ideal solution we would also be able to continuously identify all kinds of new debates and related views.",
"cite_spans": [
{
"start": 355,
"end": 376,
"text": "(Draws et al., 2020a)",
"ref_id": "BIBREF12"
},
{
"start": 415,
"end": 437,
"text": "(Hossain et al., 2020)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Technical and Conceptual Challenges",
"sec_num": "3.2"
},
{
"text": "We believe that a combination of state-of-the-art NLP techniques such as neural language models can help address this problem without resorting to manual or unsupervised techniques. A possible interesting research direction is zero-shot or oneshot learning as in Allaway and McKeown (2020) , where a model with the help of large(-scale) language models learns to identify new debates and viewpoints not seen at training time. In our case, this would mean identifying new debates and new viewpoints without explicit training on these when training for our task. We elaborate on potentially useful NLP tasks to focus on for our problem in the following section.",
"cite_spans": [
{
"start": 263,
"end": 289,
"text": "Allaway and McKeown (2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Technical and Conceptual Challenges",
"sec_num": "3.2"
},
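As a rough sketch of the zero-shot idea (not a reproduction of Allaway and McKeown (2020)'s model), an off-the-shelf NLI-based zero-shot classifier can be asked to score stances towards a topic it was never explicitly trained on. The model name, label set, and hypothesis template below are illustrative choices, not recommendations.

```python
from transformers import pipeline

# The NLI model scores whether the sentence entails a hypothesis built from
# each candidate label, so no stance-labelled data for this topic is needed.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

sentence = (
    "The city council argues that expanding the bike lane network will cut "
    "congestion, while shop owners fear it will drive away customers."
)
topic = "expanding the bike lane network"

result = classifier(
    sentence,
    candidate_labels=["in favor", "against", "neutral"],
    hypothesis_template=f"The stance towards {topic} is {{}}.",
)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```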
{
"text": "Within the NLP, text mining, and recommender systems literature, there are several (related) tasks that deal with identifying viewpoints, perspectives, and arguments in written language. We define a task in NLP as a clearly defined problem such as \"stance detection\", with each task having connected methods, benchmark datasets, leaderboards and literature. The literature is currently fragmented in different related tasks and also definitions of viewpoint, argument or claim, and perspective. Researchers also use different datasets and contenttypes (tweets and microblogs, internet discussions on websites like debate.org, or news texts).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevant NLP Tasks",
"sec_num": "4"
},
{
"text": "In this section we discuss NLP tasks that are related to viewpoint and argumentation diversity as defined in relation to the normative goal of a healthy deliberative democracy. Recall that a deliberative model assumes that participants of a democracy need access to a variety of (conflicting) viewpoints and lines of argumentation. As such, we focus on NLP tasks that help identify what claims, stances, and argumentation are present in news articles, and how specific items in the news are presented or framed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevant NLP Tasks",
"sec_num": "4"
},
{
"text": "An important distinction that needs to be made is the one between stance and sentiment: a negative sentiment does not necessarily mean a negative stance or viewpoint on an issue, and vice versa. An example would be someone who supports the use of mouth masks as COVID-19 regulation (positive stance), and expresses negative sentiment towards the topic by criticizing the shortage of mouth masks available for caregivers. In this paper, we concern ourselves with stance on issues (being in favor of masks) rather than with sentiment expressed about such issues (being negative about their shortage).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevant NLP Tasks",
"sec_num": "4"
},
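To make the stance/sentiment distinction concrete, a toy annotation of the mouth-mask example could look as follows; the texts and labels are invented purely for illustration.

```python
# Sentiment and stance are separate labels and need not agree.
examples = [
    {
        "text": "It is a disgrace that caregivers still cannot get enough mouth masks.",
        "target": "wearing mouth masks as a COVID-19 measure",
        "sentiment": "negative",   # angry tone about the shortage
        "stance": "in favor",      # still supports the measure itself
    },
    {
        "text": "So glad the mask mandate is finally gone, what a relief.",
        "target": "wearing mouth masks as a COVID-19 measure",
        "sentiment": "positive",   # relieved, upbeat tone
        "stance": "against",       # opposes the measure
    },
]
for ex in examples:
    print(f"sentiment={ex['sentiment']:<8} stance={ex['stance']:<9} | {ex['text']}")
```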
{
"text": "The remainder of this section is structured as follows. We first describe work on recommender systems that explicitly refers to detecting viewpoints. We then address three relatively established NLP tasks: argumentation mining, stance detection and polarization, frames & propaganda. We then briefly address work that refers to 'perspectives'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relevant NLP Tasks",
"sec_num": "4"
},
{
"text": "The recommender systems literature specifically uses the term 'viewpoint' in relation to diversifying recommendation. In these viewpoint-based papers, we notice a systems-focused tendency. Defining a viewpoint is less of a concern, nor is evaluating the viewpoint detection. Instead, researchers centralize viewpoint presentation to users, or how these respond to more diverse news, as in Lu et al. (2020) and Tintarev (2017). As a result, there is no standard definition of 'viewpoint' and the concept is operationalized differently by various authors. Draws et al. (2020a) use topic models to extract and find viewpoints in news texts with an unsupervised method, with the explicit goal to diversify a news recommender. They explicitly connect different sentiments to different viewpoints or perspectives. For this study, they use clearly argumentative text on abortion from a debating website. The words 'viewpoint' and 'perspective' are used interchangeably in this study. Carlebach et al. (2020) also address what they call \"diverse viewpoint identification\". Here as well, we see a wide range of definitions and terms related to viewpoints and perspectives (e.g. 'claim', 'hypothesis', 'entailment'). The authors use stateof-the-art methods including large neural language models, but the study does not seem to consider carefully defining their task, term definitions, and the needs of the problem. As such, it is unclear what they detect exactly. This is mainly due to the detection itself not being the main focus of their paper.",
"cite_spans": [
{
"start": 389,
"end": 405,
"text": "Lu et al. (2020)",
"ref_id": "BIBREF24"
},
{
"start": 554,
"end": 574,
"text": "Draws et al. (2020a)",
"ref_id": "BIBREF12"
},
{
"start": 977,
"end": 1000,
"text": "Carlebach et al. (2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Viewpoint Detection and Diversity",
"sec_num": "4.1"
},
{
"text": "With the more NLP-based tasks and definitions in the following sections, we explore how NLP tasks relate to this 'viewpoints' idea from the recommender systems community, and see what ideas and techniques these other tasks can add to diversity in news recommendation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Viewpoint Detection and Diversity",
"sec_num": "4.1"
},
{
"text": "Argument Mining is the automatic extraction and analysis of specific units of argumentative text. It usually involves user-generated texts, such as comments, tweets, or blogposts. Such content is often highly argumentative by design, with high sentiment scores. In some studies, arguments are related to stances, as in the Dagstuhl ArgQuality Corpus (Wachsmuth et al., 2017) , where 320 arguments cover 16 (political or societal) topics, and are balanced for different stances on the same topic. These arguments are from websites specifically aimed at debating. Stab and Gurevych (2017) identify the different sub-tasks in argumentation mining, and use essays as the argumented texts in question. For instance, one sub-task is separating argumentative from non-argumentative text units. Then, their pipeline involves classifying argument components into claims and premises, and finally it involves identifying argument relations. This first sub-task is also sometimes called claim detection, and is related to detecting stances and viewpoints when connecting claims to issues.",
"cite_spans": [
{
"start": 350,
"end": 374,
"text": "(Wachsmuth et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 562,
"end": 586,
"text": "Stab and Gurevych (2017)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Mining",
"sec_num": "4.2"
},
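The sub-tasks above can be read as a processing pipeline. The sketch below mirrors only that structure; the keyword heuristics are toy stand-ins for the trained classifiers that systems such as Stab and Gurevych (2017) actually use, and are not meant as a serious model.

```python
from dataclasses import dataclass

@dataclass
class ArgumentComponent:
    text: str
    component_type: str  # "claim" or "premise"

def is_argumentative(sentence: str) -> bool:
    """Sub-task 1: separate argumentative from non-argumentative units (toy cue list)."""
    cues = ("should", "because", "therefore", "in my view")
    return any(cue in sentence.lower() for cue in cues)

def classify_component(sentence: str) -> ArgumentComponent:
    """Sub-task 2: label argumentative units as claims or premises (toy rule)."""
    kind = "premise" if "because" in sentence.lower() else "claim"
    return ArgumentComponent(sentence, kind)

def link_components(components):
    """Sub-task 3: identify argument relations (here: premises support the first claim)."""
    claims = [c for c in components if c.component_type == "claim"]
    premises = [c for c in components if c.component_type == "premise"]
    return [(p, "supports", claims[0]) for p in premises] if claims else []

sentences = [
    "The new stadium opened last Saturday.",
    "The city should invest in public transport instead.",
    "Because car traffic already exceeds road capacity.",
]
components = [classify_component(s) for s in sentences if is_argumentative(s)]
for premise, relation, claim in link_components(components):
    print(f"'{premise.text}' {relation} '{claim.text}'")
```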
{
"text": "For a deliberative democracy, the work on distinguishing argumentative from non-argumentative text in argument mining is useful, since our goal requires the highlighting of deliberations and arguments, and not statements on facts. Identifying this distinction might enable us to identify viewpoints in news texts. The precise identification of claims and premises may also prove valuable, because supporting a deliberative democracy requires the detection of different deliberations and arguments in news texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Argument Mining",
"sec_num": "4.2"
},
{
"text": "Stance detection is the computational task of detecting \"whether the author of the text is in favor of, against, or neutral towards a proposition or target\" (Mohammad et al., 2017, p. 1). This task usually involves social media texts and, once again, user-generated content. Commonly, these are shorts texts such as tweets. For instance, Mohammad et al. (2017) provide a frequently used Twitter dataset that strongly connects stances with sentiment and/or emotional scores of the text. Another common trend in stance detection is to use text explicitly written in the context of an (online) debate, such as the website debate.org and social media discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stance Detection",
"sec_num": "4.3"
},
{
"text": "A recent study on Dutch social media comments highlights the difficulties in annotating stances on vaccination (Bauwelinck and Lefever, 2020) . The authors identify the need to annotate topics, but also topic aspects and whether units are expressing an argument or not. Getting to good inter-annotator agreement (IAA) is difficult, showing that these concepts related to debate and stance are not uniform to all annotators even after extensive training. The same is found by Morante et al. (2020) : Annotating Dutch social media text as well as other debate text on the vaccination debate, they find obtaining a high IAA is no easy task.",
"cite_spans": [
{
"start": 111,
"end": 141,
"text": "(Bauwelinck and Lefever, 2020)",
"ref_id": "BIBREF1"
},
{
"start": 475,
"end": 496,
"text": "Morante et al. (2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stance Detection",
"sec_num": "4.3"
},
{
"text": "Other work related to stance detection is more related to the news domain. The Fake News Classification Task (Hanselowski et al., 2018b ) has a sub-task that concerns itself with predicting the stance of a news article towards the news headline. In their setup stances can be 'Unrelated', 'Discuss', 'Agree' or 'Disagree'. The Fake News Classification tasks also introduces claim verification as a sub-task. This task is also related to the claim detection task: in order to verify claims, one needs to detect them first.",
"cite_spans": [
{
"start": 109,
"end": 135,
"text": "(Hanselowski et al., 2018b",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stance Detection",
"sec_num": "4.3"
},
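For concreteness, the headline-article stance sub-task pairs a headline with an article body and asks for one of four labels. The instances below are invented; only the four-way label set comes from the task description.

```python
# Invented instances in the headline/article-body stance format.
STANCE_LABELS = ("Unrelated", "Discuss", "Agree", "Disagree")

instances = [
    {
        "headline": "New study links coffee to longer life expectancy",
        "body": "Researchers caution that the observed effect is small and may be "
                "explained by lifestyle differences between coffee drinkers and others.",
        "stance": "Discuss",
    },
    {
        "headline": "New study links coffee to longer life expectancy",
        "body": "The transfer window closed yesterday with a record number of deals.",
        "stance": "Unrelated",
    },
]

for inst in instances:
    assert inst["stance"] in STANCE_LABELS
    print(f"[{inst['stance']:>9}] {inst['headline']} || {inst['body'][:50]}...")
```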
{
"text": "Several papers specifically aim at stance detection in the news domain. Conforti et al. (2020) note that different types of news events, from wars to economic issues, might lead to stance classes that are not uniform across events. As a response, they decide to annotate stance on one specific type of news event: company acquisitions. The authors explicitly note here that textual entailment and sentiment analysis are different tasks from stance detection, but acknowledge that all these tasks are related. However, as stated before, in the news domain new topics or issues occur constantly. Data on only one type of news event is less representative of all texts in the news domain. Some recent work aims to address this through one-shot or zeroshot learning for detecting issues and viewpoints on issues (Allaway and McKeown, 2020) . In such an approach, unseen topics or viewpoints would be detected even when they are very different from what is annotated or seen at training time.",
"cite_spans": [
{
"start": 72,
"end": 94,
"text": "Conforti et al. (2020)",
"ref_id": "BIBREF10"
},
{
"start": 808,
"end": 835,
"text": "(Allaway and McKeown, 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stance Detection",
"sec_num": "4.3"
},
{
"text": "Based on the above, there are three challenges involved in applying previous approaches on stance detection for diversifying news: First, most work on stance detection aims at short, high-sentiment user-generated texts with one specific stance. News articles are more complex. News texts might highlight a debate with several viewpoints of different people, with the emphasis on one rather than the other. Secondly, the authors of news articles generally do not express opinions explicitly, unlike authors of tweets or blogs. News articles can express viewpoints in more subtle ways, in the way a story is told or framed. Additionally, training data that does come from the news domain may not generalize well to new topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stance Detection",
"sec_num": "4.3"
},
{
"text": "We conclude that stance detection is, in principle, a relevant task when aiming to ensure news recommendation supports a deliberative democracy, but the challenges generalizing to new topics and dealing with more subtle ways of expressing viewpoints must be addressed. One shot learning may provide means to deal with new topics in the every-changing news landscape. The focus on longer, less explicitly argumentative text is helpful for our goal, and exists in for instance the first subtasks of fake news detection (Hanselowski et al., 2018a) and other recent news-focused datasets and papers (Conforti et al., 2020; Allaway and McKeown, 2020) .",
"cite_spans": [
{
"start": 517,
"end": 544,
"text": "(Hanselowski et al., 2018a)",
"ref_id": "BIBREF17"
},
{
"start": 595,
"end": 618,
"text": "(Conforti et al., 2020;",
"ref_id": "BIBREF10"
},
{
"start": 619,
"end": 645,
"text": "Allaway and McKeown, 2020)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stance Detection",
"sec_num": "4.3"
},
{
"text": "Some work already explicitly takes into account the more complex political dimension of news texts when defining an NLP task. This work is often interdisciplinary in nature, with NLP researchers working with political scientists or media scholars. The idea of (political) perspectives is prominent in these papers, though researchers in this subfield use different definitions and names for similar tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polarization, Frames, and Propaganda",
"sec_num": "4.4"
},
{
"text": "'Frames', 'propaganda', and 'polarization' are loaded terms, with less nuance than terms such as 'stance' and 'argument'. Terms like 'polarization' are (ironically) more polarizing due to their political connotations. An explicitly political aspect in the task definition can be useful for our societal problem -as stated, the deliberative democracy goal is also inherently connected to political debates. However, it can also lead to a confusion of terminology or the use of (accidentally) loaded terminology, for instance terms that are controversial in related disciplines such as communication science or media studies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polarization, Frames, and Propaganda",
"sec_num": "4.4"
},
{
"text": "An example is a recent shared task on Propaganda techniques (Da San Martino et al., 2019) . It distinguishes 18 classes of what the authors call 'rhetorical strategies' that are not synonymous with, but related to, propaganda. These include 'whataboutism', 'bandwagon', and 'appeal to fear and prejudice', as well as 'Hitler-comparisons'. These terms are, incidentally, also known as cognitive biases (the bandwagon effect) or framing (appeal to fear) and argumentation flaws (Hitlercomparisons, on the internet known as Godwin's Law). Such confusion of terminology, especially in a politically sensitive context, makes it less straightforward to see how this task can be used for viewpoint diversification in support of a deliberative democracy.",
"cite_spans": [
{
"start": 60,
"end": 89,
"text": "(Da San Martino et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Polarization, Frames, and Propaganda",
"sec_num": "4.4"
},
{
"text": "Sometimes, the task of identifying different viewpoints on an issue or event in the news is translated to 'political bias'. In such work, the viewpoints are related to a certain ideology or political party (Roy and Goldwasser, 2020) or 'media frames'. However, we would argue that a viewpoint in the public debate does not have to be a political standpoint related to a specific political ideology. Limiting ourselves only to detecting debates and viewpoints explicitly related to political parties would also limit the view on public debate and deliberative democracy, and thus would not support our normative goal to its full extent.",
"cite_spans": [
{
"start": 206,
"end": 232,
"text": "(Roy and Goldwasser, 2020)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Polarization, Frames, and Propaganda",
"sec_num": "4.4"
},
{
"text": "Other NLP work that addresses the political nature of news texts and perspectives is Fokkens et al. (2018) . In this work, stereotypes on Muslims are detected with a self-defined method known as 'micro-portrait extraction'. This paper is an example of work where other disciplines (communication and media experts) are heavily involved in task definition and execution, aiding clear and careful definitions and aiding to the problem and the societal complex issue (stereotypes in the news) at hand.",
"cite_spans": [
{
"start": 85,
"end": 106,
"text": "Fokkens et al. (2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Polarization, Frames, and Propaganda",
"sec_num": "4.4"
},
{
"text": "'Fake news' related tasks are also connected to the political content of news. The Fake News Classification Task (Hanselowski et al., 2018b) has the explicit goal to identify fake news. It consists of several sub-tasks related to argument mining and stance detection. The debate on (fake) news has recently shifted away from the simple label 'fake news', since it is not only the simple distinction between fake and true that is interesting. This again shows the importance of multi-disciplinary work: computational tasks are often aimed at a simple classification such as 'true' versus 'false', while social scientists and media experts call for different labels not directly related to the truth of an entire article or claim, such as 'false news', 'misleading news', 'junk news' (Burger et al., 2019) , or 'clickbait'. All these are terms for a media diet with lower quality (or with less 'editorial values' to use the term from Lu et al. (2020) ).",
"cite_spans": [
{
"start": 113,
"end": 140,
"text": "(Hanselowski et al., 2018b)",
"ref_id": "BIBREF18"
},
{
"start": 782,
"end": 803,
"text": "(Burger et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 932,
"end": 948,
"text": "Lu et al. (2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Polarization, Frames, and Propaganda",
"sec_num": "4.4"
},
{
"text": "It can be useful for a deliberative democracysupporting diverse news recommender when tasks already incorporate the political dimension of news texts. However, it can also be harmful when the political or social science definitions are not clear and uniform, or when the political dimension actually narrows what a deliberative democracy is by only considering explicitly political viewpoints, or only views tied to political parties or ideologies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polarization, Frames, and Propaganda",
"sec_num": "4.4"
},
{
"text": "In NLP, definitions of 'perspective' range from 'a relation between the source of a statement (i.e. the author or another entity introduced in the text) and a target in that statement (i.e. an entity, event, or (micro-)proposition)' (Van Son et al., 2016) to stances to specific (political) claims in text (Roy and Goldwasser, 2020) . These definitions are similar to those seen in the Stance Detection literature. Sometimes, it is unclear what the difference is between a stance and a perspective.",
"cite_spans": [
{
"start": 233,
"end": 255,
"text": "(Van Son et al., 2016)",
"ref_id": "BIBREF36"
},
{
"start": 306,
"end": 332,
"text": "(Roy and Goldwasser, 2020)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Perspectives",
"sec_num": "4.5"
},
{
"text": "Common debate content used for analysis and task definition of perspectives is political elections (Van Son et al., 2016) , vaccination (Morante et al., 2020) , and also societally debated topics like abortion. Perspectives are especially useful for our goal, since they assume different groups in society are seeing one issue from different angles. This allows us to identify an active debate in society, which explicitly supports a deliberative democracy.",
"cite_spans": [
{
"start": 99,
"end": 121,
"text": "(Van Son et al., 2016)",
"ref_id": "BIBREF36"
},
{
"start": 136,
"end": 158,
"text": "(Morante et al., 2020)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Perspectives",
"sec_num": "4.5"
},
{
"text": "In the previous section, we have outlined a number of relevant NLP tasks, and made their possible contribution to the support of a deliberative democracy through diverse news recommendation explicit. In the following section, we discuss the implications and considerations following from these separate tasks for diversity in news recommendations, and provide some advice for NLP researchers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "There has been a general push in NLP evaluation to go \"beyond accuracy\" (Ribeiro et al., 2020) and in recommender systems to go \"beyond click accuracy\" (Lu et al., 2020; Zhou et al., 2010) in evaluation and optimization. We believe that going beyond these evaluations might also mean looking at normative, societal goals and values, and the implications for the task and its effect on these goals and values. A possible advantage of a higher-level evaluation with a normative goal is that it allows the measurement of real-world impact. One explicit problem however is how to evaluate whether support of a deliberative democracy has been achieved.",
"cite_spans": [
{
"start": 152,
"end": 169,
"text": "(Lu et al., 2020;",
"ref_id": "BIBREF24"
},
{
"start": 170,
"end": 188,
"text": "Zhou et al., 2010)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.1"
},
{
"text": "Recent work by Vrijenhoek et al. (2021) has identified evaluation metrics to evaluate whether a recommender system supports specific models of democracy, one of which is the deliberative model. They propose a number of evaluation metrics for recommender system diversity that are explicitly connected to different models of democracy. These metrics could be used to evaluate different aspects of diversity related to a (deliberative) democracy. The aspects discussed are the representation of different groups in the news, whether alternative voices from minority groups are represented in the recommendations, whether the recommendations activate users to take action, and the degree of fragmentation between different users.",
"cite_spans": [
{
"start": 15,
"end": 39,
"text": "Vrijenhoek et al. (2021)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.1"
},
{
"text": "However, Vrijenhoek et al. (2021) does not address the evaluation of the NLP tasks involved. Where specific, clearly defined NLP tasks can generally be evaluated through hand-labelled evaluation sets, such sets do not provide the necessary insights to determine their role in supporting a deliberative democracy. In the end, we need to find a way to connect accuracy of NLP technologies to the overall increased diversity of news offers. Ideally, we would then also measure the ultimate impact on the users of a diverse recommender system diversifying viewpoints or stances with an NLP method. Such an evaluation is highly complex and clearly requires expertise from various fields (including technology, user studies and methods for investigating social behavior). It could for instance involve longitudinal studies on user knowledge of issues and viewpoints.",
"cite_spans": [
{
"start": 9,
"end": 33,
"text": "Vrijenhoek et al. (2021)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.1"
},
{
"text": "We argue that NLP tasks have a clear role in the development of diverse recommender systems. Especially recent developments in the field, such as the use of pre-trained language models and neural models, could be used to obtain a reliable and useful representations of issues in the news, as well as viewpoints and perspectives on these issues. Such approaches are possibly more fine-grained and can be more reliable than the now commonly used unsupervised methods such as topic models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "No NLP Task is An Island",
"sec_num": "5.2"
},
{
"text": "Benchmarking with separate datasets, definitions, and shared tasks and challenges has brought our field far, and much progress has been achieved in this manner. However, we feel complex societal issues should be aimed at achieving a societal goal rather than evaluated on task-specific benchmarking dataset. When considering issues such as diversity in news recommendation and its effects on democracy and public debate, we are at the limit of what separate NLP tasks could bring us. We should dare to look past the limits of separate tasks, and attempt to oversee the over-arching normative goals and tasks related to such problems, especially when working on real-world impact.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "No NLP Task is An Island",
"sec_num": "5.2"
},
{
"text": "As discussed in Section 4, the NLP field has many related tasks that seem to be relevant to the problem of news recommender diversity and especially the support of a deliberative democracy. However, we note that NLP tends to use their own definitions, and not consider other fields or even sub-fields, when designing these tasks. This means the field covers a wide array of different implementations and definitions related to perspectives and viewpoints in the news. We therefore urge NLP researchers to not only consider and evaluate their systems on their own definitions and tasks, but also consider the wider societal and normative goals their task connects to, and what other related tasks could be used to achieve the same or similar goals.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "No NLP Task is An Island",
"sec_num": "5.2"
},
{
"text": "NLP, especially NLP working on societal realworld problems, should involve other fields, and expertise in other fields. This is especially true when working on complex problems like viewpoint diversity in news recommendation. This recommendation has also been made at the Dagstuhl perspectives workshop \"Diversity, fairness, and datadriven personalization in (news) recommender systems\" (Bernstein et al., 2020 ), but we would like to emphasize it more specifically for the NLP field.",
"cite_spans": [
{
"start": 387,
"end": 410,
"text": "(Bernstein et al., 2020",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLP and Other Disciplines",
"sec_num": "5.3"
},
{
"text": "One example where a lack of interdisciplinary seems to sometimes to lead to issues for our problem is in the Polarization, Frames, and Propaganda set of NLP tasks outlined in Section 4.4. Definitions used of 'frame', 'propaganda', and 'polarization' are sometimes seemingly made without consulting relevant experts, or without considering earlier theoretical work defining these terms. This leads to definitions that are easy to computationally measure with existing NLP techniques, such as classification. However, these definitions do not necessarily do justice to the complex problem the model or task is aimed at. Such work also does not consult earlier theoretical and empirical considerations of these terms and definitions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NLP and Other Disciplines",
"sec_num": "5.3"
},
{
"text": "We argue for the inclusion of experts from the social sciences and humanities in every step of the process -designing the tasks and definitions, evaluation of task success and usefulness, and tying the result to broader implications. For diversity in news recommenders, this means discussing and engaging with experts on political theory and philosophy, ethics of technology, and media studies and communication science (Bernstein et al., 2020) .",
"cite_spans": [
{
"start": 420,
"end": 444,
"text": "(Bernstein et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NLP and Other Disciplines",
"sec_num": "5.3"
},
{
"text": "When our goal is to foster a healthy democratic debate, we should consider whether we should highlight or recommend content with fringe opinions that might be dangerous to individuals or the debate itself, e.g. the anti-vaxxing argument in the vaccination debate, conspiracy theories on the state of democracy, or inherently violent arguments. The deliberative model of democracy values rational and calm debate, not emotional or affective language. While this is a question of whether to recommend such views, not whether to detect them, we find it important to stress such considerations here. In a complex problem with a high-level normative goal, it is important to make such considerations explicit, as these also influence whether we are actually fostering a healthy deliberative debate. This means a simple computational solution, e.g. maximize diversity of viewpoints and debates, might not always be the best manner to reach the normative goal (e.g. foster a healthy deliberative democracy).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical and Normative Considerations",
"sec_num": "5.4"
},
{
"text": "Such more nuanced and complex issues come to light when we consider public values such as diversity and the normative goal of a deliberative democracy. They are less explicit when only considering the NLP task as a separate task, which only needs to be evaluated by its performance on a benchmark dataset. However, questions such as these are especially important when considering that NLP and its technology is contributing to the solution of a societal problem. The attention to an over-arching normative goal helps NLP researchers to consider their responsibility and the implications of their work when it is used in realworld settings. This has been argued before by researchers in the NLP community (Fokkens et al., 2014; Bender et al., 2021) , and we think it is a positive development when NLP researchers consider the wider ethical and normative considerations of their tasks and goals.",
"cite_spans": [
{
"start": 705,
"end": 727,
"text": "(Fokkens et al., 2014;",
"ref_id": "BIBREF15"
},
{
"start": 728,
"end": 748,
"text": "Bender et al., 2021)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Ethical and Normative Considerations",
"sec_num": "5.4"
},
{
"text": "In this paper, we have provided an overview of several separate NLP tasks related to news recommender system diversity, especially considering the normative goal of a deliberative democracy. An explicit incorporation of such over-arching normative goals is currently missing in these tasks, while this is conceptually very useful and societally relevant. As such, taking this end goal into account can help improve social relevance of NLP and support NLP researchers in defining specific goals and next steps in their research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Research on recommendation systems could benefit from more specific work that operationalizes the theoretical concepts in democratic theory. Such operationalizations should start with the groundwork laid by NLP tasks such as stance detection, argumentation mining and tasks aiming at detecting frames, propaganda and polarization. However, current NLP tasks do not address problems related to viewpoint diversity in news recommendation in its full complexity yet. NLP should take the complexities of news and the news recommendation domain into account. News texts often contain more than one stance or argument, and they tend to have more implicitly expressed viewpoints than other texts. Moreover, news comes with the challenge that new topics constantly appear and training data on detecting viewpoints in some issues may not generalize well to new data on other topics or issues.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "This leads us to the following two concrete steps for future work, specifically in NLP: (1) researchers should further advance methods that aim to identify more subtle ways in which viewpoints occur in real-world news text; (2) methods should address the issue of constant changes in data, with one possible solution being one-shot learning. Last but not least, in order to find out how these tasks can truly be used to improve a deliberative democracy, we face the challenge of evaluating beyond assigning correct labels to pieces of text. This brings us back to the main message of this paper: Answering this question goes beyond the expertise of NLP researchers. In order to maximize the impact of our technologies for addressing this complex problem, we need expertise from other disciplines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "This research is funded through Open Competition Digitalization Humanities and Social Science grant nr 406.D1.19.073 awarded by the Netherlands Organization of Scientific Research (NWO). We would like to thank our interdisciplinary team members, and the anonymous reviewers whose comments helped improve the paper. All opinions and remaining errors are our own.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Zeroshot stance detection: A dataset and model using generalized topic representations",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Allaway",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "8913--8931",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Allaway and Kathleen McKeown. 2020. Zero- shot stance detection: A dataset and model using generalized topic representations. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 8913- 8931.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Annotating topics, stance, argumentativeness and claims in dutch social media comments: A pilot study",
"authors": [
{
"first": "Nina",
"middle": [],
"last": "Bauwelinck",
"suffix": ""
},
{
"first": "Els",
"middle": [],
"last": "Lefever",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "8--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nina Bauwelinck and Els Lefever. 2020. Annotat- ing topics, stance, argumentativeness and claims in dutch social media comments: A pilot study. In Pro- ceedings of the 7th Workshop on Argument Mining, pages 8-18.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On the dangers of stochastic parrots: Can language models be too big",
"authors": [
{
"first": "Emily",
"middle": [
"M"
],
"last": "Bender",
"suffix": ""
},
{
"first": "Timnit",
"middle": [],
"last": "Gebru",
"suffix": ""
},
{
"first": "Angelina",
"middle": [],
"last": "McMillan-Major",
"suffix": ""
},
{
"first": "Shmargaret",
"middle": [],
"last": "Shmitchell",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of FAccT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily M Bender, Timnit Gebru, Angelina McMillan- Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big. Proceedings of FAccT.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Diversity, fairness, and data-driven personalization in (news) recommender system (dagstuhl perspectives workshop",
"authors": [
{
"first": "Abraham",
"middle": [],
"last": "Bernstein",
"suffix": ""
},
{
"first": "Claes",
"middle": [],
"last": "De Vreese",
"suffix": ""
},
{
"first": "Natali",
"middle": [],
"last": "Helberger",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "Katharina",
"middle": [
"A"
],
"last": "Zweig",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abraham Bernstein, Claes De Vreese, Natali Helberger, Wolfgang Schulz, and Katharina A Zweig. 2020. Di- versity, fairness, and data-driven personalization in (news) recommender system (dagstuhl perspectives workshop 19482).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Should we worry about filter bubbles?",
"authors": [
{
"first": "Frederik J Zuiderveen",
"middle": [],
"last": "Borgesius",
"suffix": ""
},
{
"first": "Damian",
"middle": [],
"last": "Trilling",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Moller",
"suffix": ""
},
{
"first": "Bal\u00e1zs",
"middle": [],
"last": "Bod\u00f3",
"suffix": ""
},
{
"first": "Claes",
"middle": [
"H"
],
"last": "De Vreese",
"suffix": ""
},
{
"first": "Natali",
"middle": [],
"last": "Helberger",
"suffix": ""
}
],
"year": 2016,
"venue": "Internet Policy Review",
"volume": "5",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frederik J Zuiderveen Borgesius, Damian Trilling, Ju- dith Moller, Bal\u00e1zs Bod\u00f3, Claes H De Vreese, and Natali Helberger. 2016. Should we worry about fil- ter bubbles? Internet Policy Review, 5(1).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The social structure of political echo chambers: Variation in ideological homophily in online networks",
"authors": [
{
"first": "Andrei",
"middle": [],
"last": "Boutyline",
"suffix": ""
},
{
"first": "Robb",
"middle": [],
"last": "Willer",
"suffix": ""
}
],
"year": 2017,
"venue": "Political psychology",
"volume": "38",
"issue": "3",
"pages": "551--569",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrei Boutyline and Robb Willer. 2017. The social structure of political echo chambers: Variation in ideological homophily in online networks. Political psychology, 38(3):551-569.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bias in algorithmic filtering and personalization",
"authors": [
{
"first": "Engin",
"middle": [],
"last": "Bozdag",
"suffix": ""
}
],
"year": 2013,
"venue": "Ethics and information technology",
"volume": "15",
"issue": "3",
"pages": "209--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Engin Bozdag. 2013. Bias in algorithmic filtering and personalization. Ethics and information technology, 15(3):209-227.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Are filter bubbles real?",
"authors": [
{
"first": "Axel",
"middle": [],
"last": "Bruns",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Axel Bruns. 2019. Are filter bubbles real? John Wiley & Sons.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The reach of commercially motivated junk news on facebook",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "Soeradj",
"middle": [],
"last": "Kanhai",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Pleijter",
"suffix": ""
},
{
"first": "Suzan",
"middle": [],
"last": "Verberne",
"suffix": ""
}
],
"year": 2019,
"venue": "PloS one",
"volume": "14",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Burger, Soeradj Kanhai, Alexander Pleijter, and Suzan Verberne. 2019. The reach of commer- cially motivated junk news on facebook. PloS one, 14(8):e0220446.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Cesar Ilharco Magalhaes, and Sylvain Jaume. 2020. News aggregation with diverse viewpoint identification using neural embeddings and semantic understanding models",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Carlebach",
"suffix": ""
},
{
"first": "Ria",
"middle": [],
"last": "Cheruvu",
"suffix": ""
},
{
"first": "Brandon",
"middle": [],
"last": "Walker",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 7th Workshop on Argument Mining",
"volume": "",
"issue": "",
"pages": "59--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Carlebach, Ria Cheruvu, Brandon Walker, Ce- sar Ilharco Magalhaes, and Sylvain Jaume. 2020. News aggregation with diverse viewpoint identifica- tion using neural embeddings and semantic under- standing models. In Proceedings of the 7th Work- shop on Argument Mining, pages 59-66.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Stander: An expertannotated dataset for news stance detection and evidence retrieval",
"authors": [
{
"first": "Costanza",
"middle": [],
"last": "Conforti",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Berndt",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Taher Pilehvar",
"suffix": ""
},
{
"first": "Chryssi",
"middle": [],
"last": "Giannitsarou",
"suffix": ""
},
{
"first": "Flavio",
"middle": [],
"last": "Toxvaerd",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Collier",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
"volume": "",
"issue": "",
"pages": "4086--4101",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Costanza Conforti, Jakob Berndt, Mohammad Taher Pilehvar, Chryssi Giannitsarou, Flavio Toxvaerd, and Nigel Collier. 2020. Stander: An expert- annotated dataset for news stance detection and ev- idence retrieval. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing: Findings, pages 4086-4101.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Findings of the NLP4IF-2019 shared task on fine-grained propaganda detection",
"authors": [
{
"first": "Giovanni",
"middle": [],
"last": "Da San Martino",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Barr\u00f3n-Cede\u00f1o",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censorship, Disinformation, and Propaganda",
"volume": "",
"issue": "",
"pages": "162--170",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5024"
]
},
"num": null,
"urls": [],
"raw_text": "Giovanni Da San Martino, Alberto Barr\u00f3n-Cede\u00f1o, and Preslav Nakov. 2019. Findings of the NLP4IF-2019 shared task on fine-grained propaganda detection. In Proceedings of the Second Workshop on Natural Language Processing for Internet Freedom: Censor- ship, Disinformation, and Propaganda, pages 162- 170, Hong Kong. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Helping users discover perspectives: Enhancing opinion mining with joint topic models",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Draws",
"suffix": ""
},
{
"first": "Jody",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nava",
"middle": [],
"last": "Tintarev",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of SENTIRE'20",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Draws, Jody Liu, and Nava Tintarev. 2020a. Help- ing users discover perspectives: Enhancing opinion mining with joint topic models. In Proceedings of SENTIRE'20.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Alessandro Bozzon, and Benjamin Timmermans. 2020b. Assessing viewpoint diversity in search results using ranking fairness metrics",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Draws",
"suffix": ""
},
{
"first": "Nava",
"middle": [],
"last": "Tintarev",
"suffix": ""
},
{
"first": "Ujwal",
"middle": [],
"last": "Gadiraju",
"suffix": ""
}
],
"year": null,
"venue": "Informal Proceedings of the Bias and Fairness in AI Workshop at ECML-PKDD (BIAS 2020",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Draws, Nava Tintarev, Ujwal Gadiraju, Alessan- dro Bozzon, and Benjamin Timmermans. 2020b. Assessing viewpoint diversity in search results using ranking fairness metrics. In Informal Proceedings of the Bias and Fairness in AI Workshop at ECML- PKDD (BIAS 2020).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Challenged by news personalisation: five perspectives on the right to receive information",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Eskens",
"suffix": ""
},
{
"first": "Natali",
"middle": [],
"last": "Helberger",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "Moeller",
"suffix": ""
}
],
"year": 2017,
"venue": "Journal of Media Law",
"volume": "9",
"issue": "2",
"pages": "259--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarah Eskens, Natali Helberger, and Judith Moeller. 2017. Challenged by news personalisation: five per- spectives on the right to receive information. Jour- nal of Media Law, 9(2):259-284.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Biographynet: Methodological issues when nlp supports historical research",
"authors": [
{
"first": "Antske",
"middle": [],
"last": "Fokkens",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Ter Braake",
"suffix": ""
},
{
"first": "Niels",
"middle": [],
"last": "Ockeloen",
"suffix": ""
},
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Leg\u00eane",
"suffix": ""
},
{
"first": "Guus",
"middle": [],
"last": "Schreiber",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)",
"volume": "",
"issue": "",
"pages": "3728--3735",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antske Fokkens, Serge ter Braake, Niels Ockeloen, Piek Vossen, Susan Leg\u00eane, and Guus Schreiber. 2014. Biographynet: Methodological issues when nlp supports historical research. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3728- 3735.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Studying muslim stereotyping through microportrait extraction",
"authors": [
{
"first": "Antske",
"middle": [],
"last": "Fokkens",
"suffix": ""
},
{
"first": "Nel",
"middle": [],
"last": "Ruigrok",
"suffix": ""
},
{
"first": "Camiel",
"middle": [],
"last": "Beukeboom",
"suffix": ""
},
{
"first": "Sarah",
"middle": [],
"last": "Gagestein",
"suffix": ""
},
{
"first": "Wouter",
"middle": [],
"last": "Van Atteveldt",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antske Fokkens, Nel Ruigrok, Camiel Beukeboom, Sarah Gagestein, and Wouter van Atteveldt. 2018. Studying muslim stereotyping through microportrait extraction. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Eval- uation (LREC 2018).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A retrospective analysis of the fake news challenge stance-detection task",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Hanselowski",
"suffix": ""
},
{
"first": "Pvs",
"middle": [],
"last": "Avinesh",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Caspelherr",
"suffix": ""
},
{
"first": "Debanjan",
"middle": [],
"last": "Chaudhuri",
"suffix": ""
},
{
"first": "Christian",
"middle": [
"M"
],
"last": "Meyer",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1859--1874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Hanselowski, PVS Avinesh, Benjamin Schiller, Felix Caspelherr, Debanjan Chaudhuri, Christian M Meyer, and Iryna Gurevych. 2018a. A retrospective analysis of the fake news chal- lenge stance-detection task. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1859-1874.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A retrospective analysis of the fake news challenge stance-detection task",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Hanselowski",
"suffix": ""
},
{
"first": "Pvs",
"middle": [],
"last": "Avinesh",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Caspelherr",
"suffix": ""
},
{
"first": "Debanjan",
"middle": [],
"last": "Chaudhuri",
"suffix": ""
},
{
"first": "Christian",
"middle": [
"M"
],
"last": "Meyer",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1859--1874",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Hanselowski, Avinesh PVS, Benjamin Schiller, Felix Caspelherr, Debanjan Chaudhuri, Christian M. Meyer, and Iryna Gurevych. 2018b. A retrospective analysis of the fake news challenge stance-detection task. In Proceedings of the 27th International Conference on Computational Lin- guistics, pages 1859-1874, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "On the democratic role of news recommenders",
"authors": [
{
"first": "Natali",
"middle": [],
"last": "Helberger",
"suffix": ""
}
],
"year": 2019,
"venue": "Digital Journalism",
"volume": "7",
"issue": "8",
"pages": "993--1012",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natali Helberger. 2019. On the democratic role of news recommenders. Digital Journalism, 7(8):993-1012.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Covidlies: Detecting covid-19 misinformation on social media",
"authors": [
{
"first": "Tamanna",
"middle": [],
"last": "Hossain",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Logan",
"suffix": "IV"
},
{
"first": "Arjuna",
"middle": [],
"last": "Ugarte",
"suffix": ""
},
{
"first": "Yoshitomo",
"middle": [],
"last": "Matsubara",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 1st Workshop on NLP for COVID-19",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tamanna Hossain, Robert L Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, and Sameer Singh. 2020. Covidlies: Detecting covid-19 misin- formation on social media. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Echo chamber: Rush Limbaugh and the conservative media establishment",
"authors": [
{
"first": "Kathleen",
"middle": [
"Hall"
],
"last": "Jamieson",
"suffix": ""
},
{
"first": "Joseph N",
"middle": [],
"last": "Cappella",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathleen Hall Jamieson and Joseph N Cappella. 2008. Echo chamber: Rush Limbaugh and the conserva- tive media establishment. Oxford University Press.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Diversity, serendipity, novelty, and coverage: a survey and empirical analysis of beyond-accuracy objectives in recommender systems",
"authors": [
{
"first": "Marius",
"middle": [],
"last": "Kaminskas",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Bridge",
"suffix": ""
}
],
"year": 2016,
"venue": "ACM Transactions on Interactive Intelligent Systems (TiiS)",
"volume": "7",
"issue": "1",
"pages": "1--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marius Kaminskas and Derek Bridge. 2016. Diversity, serendipity, novelty, and coverage: a survey and em- pirical analysis of beyond-accuracy objectives in rec- ommender systems. ACM Transactions on Interac- tive Intelligent Systems (TiiS), 7(1):1-42.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "News recommender systems-survey and roads ahead",
"authors": [
{
"first": "Mozhgan",
"middle": [],
"last": "Karimi",
"suffix": ""
},
{
"first": "Dietmar",
"middle": [],
"last": "Jannach",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jugovac",
"suffix": ""
}
],
"year": 2018,
"venue": "Information Processing & Management",
"volume": "54",
"issue": "6",
"pages": "1203--1227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mozhgan Karimi, Dietmar Jannach, and Michael Ju- govac. 2018. News recommender systems-survey and roads ahead. Information Processing & Man- agement, 54(6):1203-1227.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Beyond optimizing for clicks: Incorporating editorial values in news recommendation",
"authors": [
{
"first": "Feng",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Anca",
"middle": [],
"last": "Dumitrache",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Graus",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization",
"volume": "",
"issue": "",
"pages": "145--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng Lu, Anca Dumitrache, and David Graus. 2020. Beyond optimizing for clicks: Incorporating edito- rial values in news recommendation. In Proceed- ings of the 28th ACM Conference on User Modeling, Adaptation and Personalization, pages 145-153.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "On legitimacy and political deliberation",
"authors": [
{
"first": "Bernard",
"middle": [],
"last": "Manin",
"suffix": ""
}
],
"year": 1987,
"venue": "Political theory",
"volume": "15",
"issue": "3",
"pages": "338--368",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernard Manin. 1987. On legitimacy and political de- liberation. Political theory, 15(3):338-368.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Stance and sentiment in tweets",
"authors": [
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Parinaz",
"middle": [],
"last": "Sobhani",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Transactions on Internet Technology (TOIT)",
"volume": "17",
"issue": "3",
"pages": "1--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad, Parinaz Sobhani, and Svetlana Kiritchenko. 2017. Stance and sentiment in tweets. ACM Transactions on Internet Technology (TOIT), 17(3):1-23.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Do not blame it on the algorithm: an empirical assessment of multiple recommender systems and their impact on content diversity",
"authors": [
{
"first": "Judith",
"middle": [],
"last": "M\u00f6ller",
"suffix": ""
},
{
"first": "Damian",
"middle": [],
"last": "Trilling",
"suffix": ""
},
{
"first": "Natali",
"middle": [],
"last": "Helberger",
"suffix": ""
},
{
"first": "Bram",
"middle": [],
"last": "Van Es",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "21",
"issue": "",
"pages": "959--977",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Judith M\u00f6ller, Damian Trilling, Natali Helberger, and Bram van Es. 2018. Do not blame it on the al- gorithm: an empirical assessment of multiple rec- ommender systems and their impact on content di- versity. Information, Communication & Society, 21(7):959-977.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Annotating perspectives on vaccination",
"authors": [
{
"first": "Roser",
"middle": [],
"last": "Morante",
"suffix": ""
},
{
"first": "Chantal",
"middle": [],
"last": "Van Son",
"suffix": ""
},
{
"first": "Isa",
"middle": [],
"last": "Maks",
"suffix": ""
},
{
"first": "Piek",
"middle": [],
"last": "Vossen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4964--4973",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roser Morante, Chantal Van Son, Isa Maks, and Piek Vossen. 2020. Annotating perspectives on vacci- nation. In Proceedings of The 12th Language Re- sources and Evaluation Conference, pages 4964- 4973.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "The virtual sphere: The internet as a public sphere",
"authors": [
{
"first": "Zizi",
"middle": [],
"last": "Papacharissi",
"suffix": ""
}
],
"year": 2002,
"venue": "New media & society",
"volume": "4",
"issue": "",
"pages": "9--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zizi Papacharissi. 2002. The virtual sphere: The inter- net as a public sphere. New media & society, 4(1):9- 27.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The filter bubble: What the Internet is hiding from you",
"authors": [
{
"first": "Eli",
"middle": [],
"last": "Pariser",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eli Pariser. 2011. The filter bubble: What the Internet is hiding from you. Penguin UK.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Beyond accuracy: Behavioral testing of NLP models with CheckList",
"authors": [
{
"first": "Marco",
"middle": [
"Tulio"
],
"last": "Ribeiro",
"suffix": ""
},
{
"first": "Tongshuang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4902--4912",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.442"
]
},
"num": null,
"urls": [],
"raw_text": "Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Be- havioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4902- 4912, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Weakly supervised learning of nuanced frames for analyzing polarization in news media",
"authors": [
{
"first": "Shamik",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Goldwasser",
"suffix": ""
}
],
"year": 2020,
"venue": "EMNLP Findings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shamik Roy and Dan Goldwasser. 2020. Weakly su- pervised learning of nuanced frames for analyzing polarization in news media. EMNLP Findings.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Parsing argumentation structures in persuasive essays",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "Computational Linguistics",
"volume": "43",
"issue": "3",
"pages": "619--659",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Stab and Iryna Gurevych. 2017. Parsing ar- gumentation structures in persuasive essays. Com- putational Linguistics, 43(3):619-659.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "# Republic: Divided democracy in the age of social media",
"authors": [
{
"first": "Cass",
"middle": [
"R"
],
"last": "Sunstein",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cass R Sunstein. 2018. # Republic: Divided democ- racy in the age of social media. Princeton University Press.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Presenting diversity aware recommendations: Making challenging news acceptable",
"authors": [
{
"first": "Nava",
"middle": [],
"last": "Tintarev",
"suffix": ""
}
],
"year": 2017,
"venue": "The FATREC Workshop on Responsible Recommendation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nava Tintarev. 2017. Presenting diversity aware recom- mendations: Making challenging news acceptable. In The FATREC Workshop on Responsible Recom- mendation.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Antske Fokkens, Isa Maks, Roser Morante, Lora Aroyo, and Piek Vossen",
"authors": [
{
"first": "Chantal",
"middle": [],
"last": "Van Son",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Caselli",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "1177--1184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chantal Van Son, Tommaso Caselli, Antske Fokkens, Isa Maks, Roser Morante, Lora Aroyo, and Piek Vossen. 2016. GRaSP: A multilayered annotation scheme for perspectives. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1177-1184.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Recommenders with a mission: assessing diversity in news recommendations",
"authors": [
{
"first": "Sanne",
"middle": [],
"last": "Vrijenhoek",
"suffix": ""
},
{
"first": "Mesut",
"middle": [],
"last": "Kaya",
"suffix": ""
},
{
"first": "Nadia",
"middle": [],
"last": "Metoui",
"suffix": ""
},
{
"first": "Judith",
"middle": [],
"last": "M\u00f6ller",
"suffix": ""
},
{
"first": "Daan",
"middle": [],
"last": "Odijk",
"suffix": ""
},
{
"first": "Natali",
"middle": [],
"last": "Helberger",
"suffix": ""
}
],
"year": 2021,
"venue": "SIGIR Conference on Human Information Interaction and Retrieval (CHIIR) Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanne Vrijenhoek, Mesut Kaya, Nadia Metoui, Judith M\u00f6ller, Daan Odijk, and Natali Helberger. 2021. Recommenders with a mission: assessing diversity in news recommendations. In SIGIR Conference on Human Information Interaction and Retrieval (CHIIR) Proceedings.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Computational argumentation quality assessment in natural language",
"authors": [
{
"first": "Henning",
"middle": [],
"last": "Wachsmuth",
"suffix": ""
},
{
"first": "Nona",
"middle": [],
"last": "Naderi",
"suffix": ""
},
{
"first": "Yufang",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "Yonatan",
"middle": [],
"last": "Bilu",
"suffix": ""
},
{
"first": "Vinodkumar",
"middle": [],
"last": "Prabhakaran",
"suffix": ""
},
{
"first": "Tim",
"middle": [
"Alberdingk"
],
"last": "Thijm",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "176--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Henning Wachsmuth, Nona Naderi, Yufang Hou, Yonatan Bilu, Vinodkumar Prabhakaran, Tim Al- berdingk Thijm, Graeme Hirst, and Benno Stein. 2017. Computational argumentation quality assess- ment in natural language. In Proceedings of the 15th Conference of the European Chapter of the Associa- tion for Computational Linguistics: Volume 1, Long Papers, pages 176-187.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Affinity profiling and discrimination by association in online behavioural advertising",
"authors": [
{
"first": "Sandra",
"middle": [],
"last": "Wachter",
"suffix": ""
}
],
"year": 2020,
"venue": "Berkeley Technology Law Journal",
"volume": "",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandra Wachter. 2020. Affinity profiling and discrimi- nation by association in online behavioural advertis- ing. Berkeley Technology Law Journal, 35(2).",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Solving the apparent diversity-accuracy dilemma of recommender systems",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zolt\u00e1n",
"middle": [],
"last": "Kuscsik",
"suffix": ""
},
{
"first": "Jian-Guo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mat\u00fa\u0161",
"middle": [],
"last": "Medo",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"Rushton"
],
"last": "Wakeling",
"suffix": ""
},
{
"first": "Yi-Cheng",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "107",
"issue": "10",
"pages": "4511--4515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Zhou, Zolt\u00e1n Kuscsik, Jian-Guo Liu, Mat\u00fa\u0161 Medo, Joseph Rushton Wakeling, and Yi-Cheng Zhang. 2010. Solving the apparent diversity-accuracy dilemma of recommender systems. Proceedings of the National Academy of Sciences, 107(10):4511- 4515.",
"links": null
}
},
"ref_entries": {}
}
}