Dataset columns (string length ranges as observed in this split):

gem_id: string (length 37 to 41)
paper_id: string (length 3 to 4)
paper_title: string (length 19 to 183)
paper_abstract: string (length 168 to 1.38k)
paper_content: dict
paper_headers: dict
slide_id: string (length 37 to 41)
slide_title: string (length 2 to 85)
slide_content_text: string (length 11 to 2.55k)
target: string (length 11 to 2.55k)
references: list
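Below are example rows from the training split. As a quick orientation, here is a minimal sketch of loading and inspecting the data with the Hugging Face `datasets` library; the Hub repository name `GEM/SciDuet` is an assumption inferred from the `gem_id` prefix and may differ from the actual name.

```python
# Minimal sketch: load and inspect this dataset with Hugging Face `datasets`.
# Assumption: the Hub repository name "GEM/SciDuet" is inferred from the
# gem_id prefix shown below and may not match the actual repository name.
from datasets import load_dataset

ds = load_dataset("GEM/SciDuet", split="train")  # hypothetical repository name

row = ds[0]
print(row["gem_id"])         # e.g. "GEM-SciDuet-train-93#paper-1238#slide-4"
print(row["paper_title"])    # title of the source paper
print(row["slide_title"])    # slide title, e.g. "Head to head"
print(row["target"][:200])   # reference slide text (matches slide_content_text)
print(len(row["paper_content"]["paper_content_text"]))  # number of paper sentences
```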
GEM-SciDuet-train-93#paper-1238#slide-4
1238
Friendships, Rivalries, and Trysts: Characterizing Relations between Ideas in Texts
Understanding how ideas relate to each other is a fundamental question in many domains, ranging from intellectual history to public communication. Because ideas are naturally embedded in texts, we propose the first framework to systematically characterize the relations between ideas based on their occurrence in a corpus of documents, independent of how these ideas are represented. Combining two statistics-cooccurrence within documents and prevalence correlation over time-our approach reveals a number of different ways in which ideas can cooperate and compete. For instance, two ideas can closely track each other's prevalence over time, and yet rarely cooccur, almost like a "cold war" scenario. We observe that pairwise cooccurrence and prevalence correlation exhibit different distributions. We further demonstrate that our approach is able to uncover intriguing relations between ideas through in-depth case studies on news articles and research papers.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Ideas exist in the mind, but are made manifest in language, where they compete with each other for the scarce resource of human attention.", "Milton (1644) used the \"marketplace of ideas\" metaphor to argue that the truth will win out when ideas freely compete; Dawkins (1976) similarly likened the evolution of ideas to natural selection of genes.", "We propose a framework to quantitatively characterize competition and cooperation between ideas in texts, independent of how they might be represented.", "By \"ideas\", we mean any discrete conceptual units that can be identified as being present or absent in a document.", "In this work, we consider representing ideas using keywords and topics obtained in an unsupervised fashion, but our way of characterizing the relations between ideas could be applied to many other types of textual representations, such as frames (Card et al., 2015) and hashtags.", "What does it mean for two ideas to compete in texts, quantitatively?", "Consider, for example, the issue of immigration.", "There are two strongly competing narratives about the roughly 11 million people 1 who are residing in the United States without permission.", "One is \"illegal aliens\", who \"steal\" jobs and deny opportunities to legal immigrants; the other is \"undocumented immigrants\", who are already part of the fabric of society and deserve a path to citizenship (Merolla et al., 2013) .", "Although prior knowledge suggests that these two narratives compete, it is not immediately obvious what measures might reveal this competition in a corpus of writing about immigration.", "One question is whether or not these two ideas cooccur in the same documents.", "In the example above, these narratives are used by distinct groups of people with different ideologies.", "The fact that they don't cooccur is one clue that they may be in competition with each other.", "However, cooccurrence is insufficient to express the selection process of ideas, i.e., some ideas fade out over time, while others rise in popularity, analogous to the populations of species in nature.", "Of the two narratives on immigration, we may expect one to win out at the expense of another as public opinion shifts.", "Alternatively, we might expect to see these narratives reinforcing each other, as both sides intensify their messaging in response to growing opposition, much like the U.S.S.R. 
and immigration, deportation republican, party Figure 1 : Relations between ideas in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions).", "We use topics from LDA (Blei et al., 2003) to represent ideas.", "Each topic is named with a pair of words that are most strongly associated with the topic in LDA.", "Subplots show examples of relations between topics found in U.S. newspaper articles on immigration from 1980 to 2016, color coded to match the description in text.", "The y-axis represents the proportion of news articles in a year (in our corpus) that contain the corresponding topic.", "All examples are among the top 3 strongest relations in each type except (\"immigrant, undocumented\", \"illegal, alien\"), which corresponds to the two competing narratives.", "We explain the formal definition of strength in §2.", "the U.S. during the cold war.", "To capture these possibilities, we use prevalence correlation over time.", "Building on these insights, we propose a framework that combines cooccurrence within documents and prevalence correlation over time.", "This framework gives rise to four possible types of relation that correspond to the four quadrants in Fig.", "1 .", "We explain each type using examples from news articles in U.S. newspapers on immigration from 1980 to 2016.", "Here, we have used LDA to identify ideas in the form of topics, and we denote each idea with a pair of words most strongly associated with the corresponding topic.", "Friendship (correlated over time, likely to cooccur).", "The \"immigrant, undocumented\" topic tends to cooccur with \"obama, president\" and both topics have been rising during the period of our dataset, likely because the \"undocumented immigrants\" narrative was an important part of Obama's framing of the immigration issue (Haynes et al., 2016) .", "Head-to-head (anti-correlated over time, unlikely to cooccur).", "\"immigrant, undocumented\" and \"illegal, alien\" are in a head-to-head competition: these two topics rarely cooccur, and \"immigrant, undocu-mented\" has been growing in prevalence, while the usage of \"illegal, alien\" in newspapers has been declining.", "This observation agrees with a report from Pew Research Center (Guskin, 2013) .", "Tryst (anti-correlated over time, likely to cooccur).", "The two off-diagonal examples use topics related to law enforcement.", "Overall, \"immigration, deportation\" and \"detention, jail\" often cooccur but \"detention, jail\" has been declining, while \"immigration, deportation\" has been rising.", "This possibly relates to the promises to overhaul the immigration detention system (Kalhan, 2010).", "2 Arms-race (correlated over time, unlikely to cooccur).", "One of the above law enforcement topics (\"immigration, deportation\") and a topic on the Republican party (\"republican, party\") hold an armsrace relation: they are both growing in prevalence over time, but rarely cooccur, perhaps suggesting an underlying common cause.", "Note that our terminology describes the relations between ideas in texts, not necessarily between the entities to which the ideas refer.", "For example, we find that the relation between \"Israel\" and \"Palestine\" is \"friendship\" in news articles on terrorism, based on their prevalence correlation and cooccurrence in that corpus.", "We introduce the formal definition of our framework in §2 and apply it to news articles on five issues and research papers from ACL Anthology and NIPS 
as testbeds.", "We operationalize ideas using topics (Blei et al., 2003) and keywords (Monroe et al., 2008) .", "To explore whether the four relation types exist and how strong these relations are, we first examine the marginal and joint distributions of cooccurrence and prevalence correlation ( §3).", "We find that cooccurrence shows a unimodal normal-shaped distribution but prevalence correlation demonstrates more diverse distributions across corpora.", "As we would expect, there are, in general, more and stronger friendship and head-to-head relations than arms-race and tryst relations.", "Second, we demonstrate the effectiveness of our framework through in-depth case studies ( §4).", "We not only validate existing knowledge about some news issues and research areas, but also identify hypotheses that require further investigation.", "For example, using keywords to represent ideas, a top pair with the tryst relation in news articles on terrorism is \"arab\" and \"islam\"; they are likely to cooccur, but \"islam\" is rising in relative prevalence while \"arab\" is declining.", "This suggests a conjecture that the news media have increasingly linked terrorism to a religious group rather than an ethnic group.", "We also show relations between topics in ACL that center around machine translation.", "Our work is a first step towards understanding relations between ideas from text corpora, a complex and important research question.", "We provide some concluding thoughts in §6.", "Computational Framework The aim of our computational framework is to explore relations between ideas.", "We thus assume that the set of relevant ideas has been identified, and those expressed in each document have been tabulated.", "Our open-source implementation is at https://github.com/Noahs-ARK/ idea_relations/.", "In the following, we introduce our formal definitions and datasets.", "∀x, y ∈ I, PMI(x, y) = logP (x, y) P (x)P (y) = C + log 1+ t k 1{x∈dt k }·1{y∈dt k } (1+ t k 1{x∈dt k })·(1+ t k 1{y∈dt k }) (1) r(x, y) = t P (x|t)−P (x|t) P (y|t)−P (y|t) t P (x|t)−P (x|t) 2 t P (y|t)−P (y|t) 2 (2) Figure 2 : Eq.", "1 is the empirical pointwise mutual information for two ideas, our measure of cooccurrence of ideas; note that we use add-one smoothing in estimating PMI.", "Eq.", "2 is the Pearson correlation between two ideas' prevalence over time.", "Cooccurrence and Prevalence Correlation As discussed in the introduction, we focus on two dimensions to quantify relations between ideas: 1. cooccurrence reveals to what extent two ideas tend to occur in the same contexts; 2. similarity between the relative prevalence of ideas over time reveals how two ideas relate in terms of popularity or coverage.", "Our input is a collection of documents, each represented by a set of ideas and indexed by time.", "We denote a static set of ideas as I and a text corpus that consists of these ideas as C = {D 1 , .", ".", ".", ", D T }, where D t = {d t 1 , .", ".", ".", ", d t N t } gives the collection of documents at timestep t, and each document, d t k , is represented as a subset of ideas in I.", "Here T is the total number of timesteps, and N t is the number of documents at timestep t. 
It follows that the total number of documents N = T t=1 N t .", "In order to formally capture the two dimensions above, we employ two commonly-used statistics.", "First, we use empirical pointwise mutual information (PMI) to capture the cooccurrence of ideas within the same document (Church and Hanks, 1990); see Eq.", "1 in Fig.", "2 .", "Positive PMI indicates that ideas occur together more frequently than would be expected if they were independent, while negative PMI indicates the opposite.", "Second, we compute the correlation between normalized document frequency of ideas to capture the relation between the relative prevalence of ideas across documents over time; see Eq.", "2 in Fig.", "2 .", "Positiver indicates that two ideas have similar prevalence over time, while negativer sug-gests two anti-correlated ideas (i.e., when one goes up, the other goes down).", "The four types of relations in the introduction can now be obtained using PMI andr, which capture cooccurrence and prevalence correlation respectively.", "We further define the strength of the relation between two ideas as the absolute value of the product of their PMI andr scores: ∀x, y ∈ I, strength(x, y) = | PMI(x, y)×r(x, y)|.", "(3) Datasets and Representation of Ideas We use two types of datasets to validate our framework: news articles and research papers.", "We choose these two domains because competition between ideas has received significant interest in history of science (Kuhn, 1996) and research on framing (Chong and Druckman, 2007; Entman, 1993; Gitlin, 1980; Lakoff, 2014) .", "Furthermore, interesting differences may exist in these two domains as news evolves with external events and scientific research progresses through innovations.", "• News articles.", "We follow the strategy in Card et al.", "(2015) to obtain news articles from Lex-isNexis on five issues: abortion, immigration, same-sex marriage, smoking, and terrorism.", "We search for relevant articles using LexisNexis subject terms in U.S. newspapers from 1980 to 2016.", "Each of these corpora contains more than 25,000 articles.", "Please refer to the supplementary material for details.", "• Research papers.", "We consider full texts of papers from two communities: our own ACL community captured by papers from ACL, NAACL, EMNLP, and TACL from 1980 to 2014 (Radev et al., 2009 ; and the NIPS community from 1987 to 2016.", "3 There are 4.8K papers from the ACL community and 6.6K papers from the NIPS community.", "The processed datasets are available at https://chenhaot.com/ pages/idea-relations.html.", "In order to operationalize ideas in a text corpus, we consider two ways to represent ideas.", "• Topics.", "We extract topics from each document by running LDA (Blei et al., 2003) on each corpus C. 
In all datasets, we set the number of topics to 50.", "4 Formally, I is the 50 topics learned from the corpus, and each document is represented as the set of topics that are present with greater than 0.01 probability in the topic distribution for that document.", "• Keywords.", "We identify a list of distinguishing keywords for each corpus by comparing its word frequencies to the background frequencies found in other corpora using the informative Dirichlet prior model in Monroe et al.", "(2008) .", "We set the number of keywords to 100 for all corpora.", "For news articles, the background corpus for each issue is comprised of all articles from the other four issues.", "For research papers, we use NIPS as the background corpus for ACL and vice versa to identify what are the core concepts for each of these research areas.", "Formally, I is the 100 top distinguishing keywords in the corpus and each document is represented as the set of keywords within I that are present in the document.", "Refer to the supplementary material for a list of example keywords in each corpus.", "In both procedures, we lemmatize all words and add common bigram phrases to the vocabulary following Mikolov et al.", "(2013) .", "Note that in our analysis, ideas are only present or absent in a document, and a document can in principle be mapped to any subset of ideas in I.", "In our experiments 90% of documents are marked as containing between 7 and 14 ideas using topics, 8 and 33 ideas using keywords.", "Characterizing the Space of Relations To provide an overview of the four relation types in Fig.", "1 , we first examine the empirical distributions of the two statistics of interest across pairs of ideas.", "In most exploratory studies, however, we are most interested in pairs that exemplify each type of relation, i.e., the most extreme points in each quadrant.", "We thus look at these pairs in each corpus to observe how the four types differ in salience across datasets.", "Empirical Distribution Properties To the best of our knowledge, the distributions of pairwise cooccurrence and prevalence correlation have not been examined in previous literature.", "We thus first investigate the marginal distributions of cooccurrence and prevalence correlation and then our framework is to analyze relations between ideas, so this choice is not essential in this work.", "(Scott, 2015) .", "The plots along the axes show the marginal distribution of the corresponding dimension.", "In each plot, we give the Pearson correlation, and all Pearson correlations' p-values are less than 10 −40 .", "In these plots, we use topics to represent ideas.", "their joint distribution.", "Fig.", "3 shows three examples: two from news articles and one from research papers.", "We will also focus our case studies on these three corpora in §4.", "The corresponding plots for keywords have been relegated to supplementary material due to space limitations.", "Cooccurrence tends to be unimodal but not normal.", "In all of our datasets, pairwise cooccurrence ( PMI) presents a unimodal distribution that somewhat resembles a normal distribution, but it is rarely precisely normal.", "We cannot reject the hypothesis that it is unimodal for any dataset (using topics or keywords) using the dip test (Hartigan and Hartigan, 1985) , though D'Agostino's K 2 test (D'Agostino et al., 1990) rejects normality in almost all cases.", "Prevalence correlation exhibits diverse distributions.", "Pairwise prevalence correlation follows different distributions in news articles 
compared to research papers: they are unimodal in news articles, but not in ACL or NIPS.", "The dip test only rejects the unimodality hypothesis in NIPS.", "None follow normal distributions based on D'Agostino's K 2 test.", "Cooccurrence is positively correlated with prevalence correlation.", "In all of our datasets, cooccurrence is positively correlated with prevalence correlation whether we use topics or keywords to represent ideas, although the Pearson correlation coefficients vary.", "This suggests that there are more friendship and head-to-head relations than tryst and arms-race relations.", "Based on the results of kernel density estimation, we also observe that this correlation is often loose, e.g., in ACL topics, cooccurrence spreads widely at each mode of prevalence correlation.", "776 Relative Strength of Extreme Pairs We are interested in how our framework can identify intriguing relations between ideas.", "These potentially interesting pairs likely correspond to the extreme points in each quadrant instead of the ones around the origin, where PMI and prevalence correlation are both close to zero.", "Here we compare the relative strength of extreme pairs in each dataset.", "We will discuss how these extreme pairs confirm existing knowledge and suggest new hypotheses via case studies in §4.", "For each relation type, we average the strengths of the 25 pairs with the strongest relations in that type, with strength defined in Eq.", "3.", "This heuristic (henceforth collective strength) allows us to collectively compare the strengths of the most prominent friendship, tryst, arms-race, and head-to-head relations.", "The results are not sensitive to the choice of 25.", "Fig.", "4 shows the collective strength of the four types in all of our datasets.", "The most common ordering is: friendship > head-to-head > arms-race > tryst.", "The fact that friendship and head-to-head relations are strong is consistent with the positive correlation between cooccurrence and prevalence correlation.", "In news, friendship is the strongest relation type, but head-to-head is the strongest in ACL topics and NIPS topics.", "This suggests, unsurprisingly, that there are stronger head-to-head competitions (i.e., one idea takes over another) between ideas in scientific research than in news.", "We also see that topics show greater strength in our scientific article collections, while keywords dominate in news, especially in friendship.", "We conjecture that terms in scientific literature are often overloaded (e.g., a tree could be a parse tree or a decision tree), necessitating some abstraction when representing ideas.", "In contrast, news stories are more self-contained and seek to employ consistent usage.", "Exploratory Studies We present case studies based on strongly related pairs of ideas in the four types of relation.", "Throughout this section, \"rank\" refers to the rank of the relation strength between a pair of ideas in its corresponding relation type.", "International Relations in Terrorism Following a decade of declining violence in the 90s, the events of September 11, 2001 precipitated a dramatic increase in concern about terrorism, and a major shift in how it was framed (Kern et al., 2003) .", "As a showcase, we consider a topic which encompasses much of the U.S. 
government's response to terrorism: \"federal, state\".", "5 We observe two topics engaging in an \"arms race\" with this one: \"afghanistan, taliban\" and \"pakistan, india\".", "These correspond to two geopolitical regions closely linked to the U.S. government's concern with terrorism, and both were sites of U.S. military action during the period of our dataset.", "Events abroad and the U.S. government's responses follow the arms-race pattern, each holding increasing 5 As in §1, we summarize each topic using a pair of strongly associated words, instead of assigning a name.", "Figure 6 : Tryst relation between arab and islam using keywords to represent ideas (#2 in tryst): these two words tend to cooccur but are anti-correlated in prevalence over time.", "In particular, islam was rarely used in coverage of terrorism in the 1980s.", "attention with the other, likely because they share the same underlying cause.", "We also observe two head-to-head rivals to the \"federal, state\" topic: \"iran, libya\" and \"israel, palestinian\".", "While these topics correspond to regions that are hotly debated in the U.S., their coverage in news tends not to correlate temporally with the U.S. government's responses to terrorism, at least during the time period of our corpus.", "Discussion of these regions was more prevalent in the 80s and 90s, with declining media coverage since then (Kern et al., 2003) .", "The relations between these topics are consistent with structural balance theory (Cartwright and Harary, 1956; Heider, 1946) , which suggests that the enemy of an enemy is a friend.", "The \"afghanistan, taliban\" topic has the strongest friendship relation with the \"pakistan, india\" topic, i.e., they are likely to cooccur and are positively correlated in prevalence.", "Similarly, the \"iran, libya\" topic is a close \"friend\" with the \"israel, palestinian\" topic (ranked 8th in friendship).", "Fig.", "5a shows the relations between the \"federal, state\" topic and four international topics.", "Edge colors indicate relation types and the number in an edge label presents the ranking of its strength in the corresponding relation type.", "Fig.", "5b and Fig.", "5c represent concrete examples in Fig.", "5a : \"federal, state\" and \"afghanistan, taliban\" follow similar trends, although \"afghanistan, taliban\" fluctuates over time due to significant events such as the September 11 attacks in 2001 and the death of Bin Laden in 2011; while \"iran, lybia\" is negatively correlated with \"federal, state\".", "In fact, more than 70% of terrorism news in the 80s contained the \"iran, lybia\" topic.", "When using keywords to represent ideas, we observe similar relations between the term homeland security and terms related to the above foreign countries.", "In addition, we highlight an interesting but unexpected tryst relation between arab and islam (Fig.", "6) .", "It is not surprising that these two words tend to cooccur in the same news articles, but the usage of islam in the news is increasing while arab is declining.", "The increasing prevalence of islam and decreasing prevalence of arab over this time period can also be seen, for example, using Google's n-gram viewer, but it of course provides no information about cooccurrence.", "This trend has not been previously noted to the best of our knowledge, although an article in the Huffington Post called for news editors to distinguish Muslim from Arab.", "6 Our observation suggests a conjecture that the news media have increasingly linked terrorism to a 
religious group rather than an ethnic group, perhaps in part due to the tie between the events of 9/11 and Afghanistan, which is not an Arab or Arabic-speaking country.", "We leave it to further investigation to confirm or reject this hypothesis.", "To further demonstrate the effectiveness of our approach, we compare a pair's rank using only cooccurrence or prevalence correlation with its rank in our framework.", "Table 1 shows the results for three pairs above.", "If we had looked at only cooccurrence or prevalence correlation, we would probably have missed these interesting pairs.", "PMI Corr \"federal, state\", \"afghanistan, taliban\" (#2 in arms-race) 43 99 \"federal, state\", \"iran, lybia\" (#2 in head-to-head) 36 56 arab, islam (#2 in tryst) 106 1,494 Ethnicity Keywords in Immigration In addition to results on topics in §1, we observe unexpected patterns about ethnicity keywords in immigration news.", "Our observation starts with a top tryst relation between latino and asian.", "Although these words are likely to cooccur, their prevalence trajectories differ, with the discussion of Asian immigrants in the 1990s giving way to a focus on the word latino from 2000 onward.", "Possible theories to explain this observation include that undocumented immigrants are generally perceived as a Latino issue, or that Latino voters are increasingly influential in U.S. elections.", "Furthermore, latino holds head-to-head relations with two subgroups of Latin American immigrants: haitian and cuban.", "In particular, the strength of the relation with haitian is ranked #18 in headto-head relations.", "Meanwhile, haitian and cuban have a friendship relation, which is again consistent with structural balance theory.", "The decreasing prevalence of haitian and cuban perhaps speaks to the shifting geographical focus of recent immigration to the U.S., and issues of the Latino panethnicity.", "In fact, a majority of Latinos prefer to identify with their national origin relative to the pan-ethnic terms (Taylor et al., 2012) .", "However, we should also note that much of this coverage relates to a set of specific refugee crises, temporarily elevating the political importance of these nations in the U.S.", "Nevertheless, the underlying social and political reasons behind these head-to-head relations are worth further investigation.", "Relations between Topics in ACL Finally, we analyze relations between topics in the ACL Anthology.", "It turns out that \"machine translation\" is at a central position among top ranked relations in all the four types (Fig.", "8) .", "7 It is part of the strongest relation in all four types except tryst (ranked #5).", "The full relation graph presents further patterns.", "Friendship demonstrates transitivity: both \"machine translation\" and \"word alignment\" have similar relations with other topics.", "But such transitivity does not hold for tryst: although the prevalence of \"rule, forest methods\" is anti-correlated with both \"machine translation\" and \"sentiment analysis\", \"sentiment analysis\" seldom cooccurs with \"rule, for-est methods\" because \"sentiment analysis\" is seldom built on parsing algorithms.", "Similarly, \"rule, forest methods\" and \"discourse (coherence)\" hold an armsrace relation: they do not tend to cooccur and both decline in relative prevalence as \"machine translation\" rises.", "The prevalence of each of these ideas in comparison to machine translation is shown in in Fig.", "9 , which reveals additional detail.", "Figure 9 : Relations between 
topics in ACL Anthology in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions), color coded to match the text.", "The y-axis represents the relative proportion of papers in a year that contain the corresponding topic.", "The top 10 words for the rule, forest methods topic are rule, grammar, derivation, span, algorithm, forest, parsing, figure, set, string.", "Concluding Discussion We proposed a method to characterize relations between ideas in texts through the lens of cooccurrence within documents and prevalence correlation over time.", "For the first time, we observe that the distribution of pairwise cooccurrence is unimodal, while the distribution of pairwise prevalence correlation is not always unimodal, and show that they are positively correlated.", "This combination suggests four types of relations between ideas, and these four types are all found to varying extents in our experiments.", "We illustrate our computational method by exploratory studies on news corpora and scientific research papers.", "We not only confirm existing knowledge but also suggest hypotheses around the usage of arab and islam in terrorism and latino and asian in immigration.", "It is important to note that the relations found using our approach depend on the nature of the representation of ideas and the source of texts.", "For instance, we cannot expect relations found in news articles to reflect shifts in public opinion if news articles do not effectively track public opinion.", "Our method is entirely observational.", "It remains as a further stage of analysis to understand the underlying reasons that lead to these relations be-tween ideas.", "In scientific research, for example, it could simply be the progress of science, i.e., newer ideas overtake older ones deemed less valuable at a given time; on the other hand, history suggests that it is not always the correct ideas that are most expressed, and many other factors may be important.", "Similarly, in news coverage, underlying sociological and political situations have significant impact on which ideas are presented, and how.", "There are many potential directions to improve our method to account for complex relations between ideas.", "For instance, we assume that both ideas and relations are statically grounded in keywords or topics.", "In reality, ideas and relations both evolve over time: a tryst relation might appear as friendship if we focus on a narrower time period.", "Similarly, new ideas show up and even the same idea may change over time and be represented by different words." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "6" ], "paper_header_content": [ "Introduction", "Computational Framework", "Cooccurrence and Prevalence Correlation", "Datasets and Representation of Ideas", "Characterizing the Space of Relations", "Empirical Distribution Properties", "Relative Strength of Extreme Pairs", "Exploratory Studies", "International Relations in Terrorism", "Ethnicity Keywords in Immigration", "Relations between Topics in ACL", "Concluding Discussion" ] }
GEM-SciDuet-train-93#paper-1238#slide-4
Head to head
immigrant, undocumented illegal, alien
immigrant, undocumented illegal, alien
[]
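The paper text carried in `paper_content` above defines the two statistics behind its four relation types: add-one-smoothed pointwise mutual information over document cooccurrence (Eq. 1), Pearson correlation of yearly prevalence (Eq. 2), and a strength score |PMI(x, y) × r̂(x, y)| (Eq. 3). The following is an illustrative sketch of those computations, not the authors' released code (which is at https://github.com/Noahs-ARK/idea_relations/); the function names and the representation of documents as sets of ideas grouped by year are assumptions made for the example.

```python
# Illustrative sketch of the paper's statistics; not the released implementation.
import math

def pmi(docs, x, y):
    """Add-one-smoothed PMI of ideas x, y over documents (cf. Eq. 1).

    `docs` is a list of sets of ideas, one set per document. The constant
    term C shared by all pairs is dropped, since it does not affect rankings.
    """
    n_x = sum(1 for d in docs if x in d)
    n_y = sum(1 for d in docs if y in d)
    n_xy = sum(1 for d in docs if x in d and y in d)
    return math.log((1 + n_xy) / ((1 + n_x) * (1 + n_y)))

def prevalence_correlation(docs_by_year, x, y):
    """Pearson correlation of the ideas' yearly document frequencies (cf. Eq. 2)."""
    years = sorted(docs_by_year)
    px = [sum(1 for d in docs_by_year[t] if x in d) / len(docs_by_year[t]) for t in years]
    py = [sum(1 for d in docs_by_year[t] if y in d) / len(docs_by_year[t]) for t in years]
    mx, my = sum(px) / len(px), sum(py) / len(py)
    cov = sum((a - mx) * (b - my) for a, b in zip(px, py))
    sx = math.sqrt(sum((a - mx) ** 2 for a in px))
    sy = math.sqrt(sum((b - my) ** 2 for b in py))
    if sx == 0 or sy == 0:
        return 0.0  # undefined for constant series; treat as no correlation
    return cov / (sx * sy)

def strength(docs, docs_by_year, x, y):
    """Relation strength |PMI * r| (cf. Eq. 3)."""
    return abs(pmi(docs, x, y) * prevalence_correlation(docs_by_year, x, y))
```

Note that building `docs_by_year` requires the full corpus grouped by timestep, which goes beyond what any single row shown here contains.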
GEM-SciDuet-train-93#paper-1238#slide-5
1238
Friendships, Rivalries, and Trysts: Characterizing Relations between Ideas in Texts
Understanding how ideas relate to each other is a fundamental question in many domains, ranging from intellectual history to public communication. Because ideas are naturally embedded in texts, we propose the first framework to systematically characterize the relations between ideas based on their occurrence in a corpus of documents, independent of how these ideas are represented. Combining two statistics-cooccurrence within documents and prevalence correlation over time-our approach reveals a number of different ways in which ideas can cooperate and compete. For instance, two ideas can closely track each other's prevalence over time, and yet rarely cooccur, almost like a "cold war" scenario. We observe that pairwise cooccurrence and prevalence correlation exhibit different distributions. We further demonstrate that our approach is able to uncover intriguing relations between ideas through in-depth case studies on news articles and research papers.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Ideas exist in the mind, but are made manifest in language, where they compete with each other for the scarce resource of human attention.", "Milton (1644) used the \"marketplace of ideas\" metaphor to argue that the truth will win out when ideas freely compete; Dawkins (1976) similarly likened the evolution of ideas to natural selection of genes.", "We propose a framework to quantitatively characterize competition and cooperation between ideas in texts, independent of how they might be represented.", "By \"ideas\", we mean any discrete conceptual units that can be identified as being present or absent in a document.", "In this work, we consider representing ideas using keywords and topics obtained in an unsupervised fashion, but our way of characterizing the relations between ideas could be applied to many other types of textual representations, such as frames (Card et al., 2015) and hashtags.", "What does it mean for two ideas to compete in texts, quantitatively?", "Consider, for example, the issue of immigration.", "There are two strongly competing narratives about the roughly 11 million people 1 who are residing in the United States without permission.", "One is \"illegal aliens\", who \"steal\" jobs and deny opportunities to legal immigrants; the other is \"undocumented immigrants\", who are already part of the fabric of society and deserve a path to citizenship (Merolla et al., 2013) .", "Although prior knowledge suggests that these two narratives compete, it is not immediately obvious what measures might reveal this competition in a corpus of writing about immigration.", "One question is whether or not these two ideas cooccur in the same documents.", "In the example above, these narratives are used by distinct groups of people with different ideologies.", "The fact that they don't cooccur is one clue that they may be in competition with each other.", "However, cooccurrence is insufficient to express the selection process of ideas, i.e., some ideas fade out over time, while others rise in popularity, analogous to the populations of species in nature.", "Of the two narratives on immigration, we may expect one to win out at the expense of another as public opinion shifts.", "Alternatively, we might expect to see these narratives reinforcing each other, as both sides intensify their messaging in response to growing opposition, much like the U.S.S.R. 
and immigration, deportation republican, party Figure 1 : Relations between ideas in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions).", "We use topics from LDA (Blei et al., 2003) to represent ideas.", "Each topic is named with a pair of words that are most strongly associated with the topic in LDA.", "Subplots show examples of relations between topics found in U.S. newspaper articles on immigration from 1980 to 2016, color coded to match the description in text.", "The y-axis represents the proportion of news articles in a year (in our corpus) that contain the corresponding topic.", "All examples are among the top 3 strongest relations in each type except (\"immigrant, undocumented\", \"illegal, alien\"), which corresponds to the two competing narratives.", "We explain the formal definition of strength in §2.", "the U.S. during the cold war.", "To capture these possibilities, we use prevalence correlation over time.", "Building on these insights, we propose a framework that combines cooccurrence within documents and prevalence correlation over time.", "This framework gives rise to four possible types of relation that correspond to the four quadrants in Fig.", "1 .", "We explain each type using examples from news articles in U.S. newspapers on immigration from 1980 to 2016.", "Here, we have used LDA to identify ideas in the form of topics, and we denote each idea with a pair of words most strongly associated with the corresponding topic.", "Friendship (correlated over time, likely to cooccur).", "The \"immigrant, undocumented\" topic tends to cooccur with \"obama, president\" and both topics have been rising during the period of our dataset, likely because the \"undocumented immigrants\" narrative was an important part of Obama's framing of the immigration issue (Haynes et al., 2016) .", "Head-to-head (anti-correlated over time, unlikely to cooccur).", "\"immigrant, undocumented\" and \"illegal, alien\" are in a head-to-head competition: these two topics rarely cooccur, and \"immigrant, undocu-mented\" has been growing in prevalence, while the usage of \"illegal, alien\" in newspapers has been declining.", "This observation agrees with a report from Pew Research Center (Guskin, 2013) .", "Tryst (anti-correlated over time, likely to cooccur).", "The two off-diagonal examples use topics related to law enforcement.", "Overall, \"immigration, deportation\" and \"detention, jail\" often cooccur but \"detention, jail\" has been declining, while \"immigration, deportation\" has been rising.", "This possibly relates to the promises to overhaul the immigration detention system (Kalhan, 2010).", "2 Arms-race (correlated over time, unlikely to cooccur).", "One of the above law enforcement topics (\"immigration, deportation\") and a topic on the Republican party (\"republican, party\") hold an armsrace relation: they are both growing in prevalence over time, but rarely cooccur, perhaps suggesting an underlying common cause.", "Note that our terminology describes the relations between ideas in texts, not necessarily between the entities to which the ideas refer.", "For example, we find that the relation between \"Israel\" and \"Palestine\" is \"friendship\" in news articles on terrorism, based on their prevalence correlation and cooccurrence in that corpus.", "We introduce the formal definition of our framework in §2 and apply it to news articles on five issues and research papers from ACL Anthology and NIPS 
as testbeds.", "We operationalize ideas using topics (Blei et al., 2003) and keywords (Monroe et al., 2008) .", "To explore whether the four relation types exist and how strong these relations are, we first examine the marginal and joint distributions of cooccurrence and prevalence correlation ( §3).", "We find that cooccurrence shows a unimodal normal-shaped distribution but prevalence correlation demonstrates more diverse distributions across corpora.", "As we would expect, there are, in general, more and stronger friendship and head-to-head relations than arms-race and tryst relations.", "Second, we demonstrate the effectiveness of our framework through in-depth case studies ( §4).", "We not only validate existing knowledge about some news issues and research areas, but also identify hypotheses that require further investigation.", "For example, using keywords to represent ideas, a top pair with the tryst relation in news articles on terrorism is \"arab\" and \"islam\"; they are likely to cooccur, but \"islam\" is rising in relative prevalence while \"arab\" is declining.", "This suggests a conjecture that the news media have increasingly linked terrorism to a religious group rather than an ethnic group.", "We also show relations between topics in ACL that center around machine translation.", "Our work is a first step towards understanding relations between ideas from text corpora, a complex and important research question.", "We provide some concluding thoughts in §6.", "Computational Framework The aim of our computational framework is to explore relations between ideas.", "We thus assume that the set of relevant ideas has been identified, and those expressed in each document have been tabulated.", "Our open-source implementation is at https://github.com/Noahs-ARK/ idea_relations/.", "In the following, we introduce our formal definitions and datasets.", "∀x, y ∈ I, PMI(x, y) = logP (x, y) P (x)P (y) = C + log 1+ t k 1{x∈dt k }·1{y∈dt k } (1+ t k 1{x∈dt k })·(1+ t k 1{y∈dt k }) (1) r(x, y) = t P (x|t)−P (x|t) P (y|t)−P (y|t) t P (x|t)−P (x|t) 2 t P (y|t)−P (y|t) 2 (2) Figure 2 : Eq.", "1 is the empirical pointwise mutual information for two ideas, our measure of cooccurrence of ideas; note that we use add-one smoothing in estimating PMI.", "Eq.", "2 is the Pearson correlation between two ideas' prevalence over time.", "Cooccurrence and Prevalence Correlation As discussed in the introduction, we focus on two dimensions to quantify relations between ideas: 1. cooccurrence reveals to what extent two ideas tend to occur in the same contexts; 2. similarity between the relative prevalence of ideas over time reveals how two ideas relate in terms of popularity or coverage.", "Our input is a collection of documents, each represented by a set of ideas and indexed by time.", "We denote a static set of ideas as I and a text corpus that consists of these ideas as C = {D 1 , .", ".", ".", ", D T }, where D t = {d t 1 , .", ".", ".", ", d t N t } gives the collection of documents at timestep t, and each document, d t k , is represented as a subset of ideas in I.", "Here T is the total number of timesteps, and N t is the number of documents at timestep t. 
It follows that the total number of documents N = T t=1 N t .", "In order to formally capture the two dimensions above, we employ two commonly-used statistics.", "First, we use empirical pointwise mutual information (PMI) to capture the cooccurrence of ideas within the same document (Church and Hanks, 1990); see Eq.", "1 in Fig.", "2 .", "Positive PMI indicates that ideas occur together more frequently than would be expected if they were independent, while negative PMI indicates the opposite.", "Second, we compute the correlation between normalized document frequency of ideas to capture the relation between the relative prevalence of ideas across documents over time; see Eq.", "2 in Fig.", "2 .", "Positiver indicates that two ideas have similar prevalence over time, while negativer sug-gests two anti-correlated ideas (i.e., when one goes up, the other goes down).", "The four types of relations in the introduction can now be obtained using PMI andr, which capture cooccurrence and prevalence correlation respectively.", "We further define the strength of the relation between two ideas as the absolute value of the product of their PMI andr scores: ∀x, y ∈ I, strength(x, y) = | PMI(x, y)×r(x, y)|.", "(3) Datasets and Representation of Ideas We use two types of datasets to validate our framework: news articles and research papers.", "We choose these two domains because competition between ideas has received significant interest in history of science (Kuhn, 1996) and research on framing (Chong and Druckman, 2007; Entman, 1993; Gitlin, 1980; Lakoff, 2014) .", "Furthermore, interesting differences may exist in these two domains as news evolves with external events and scientific research progresses through innovations.", "• News articles.", "We follow the strategy in Card et al.", "(2015) to obtain news articles from Lex-isNexis on five issues: abortion, immigration, same-sex marriage, smoking, and terrorism.", "We search for relevant articles using LexisNexis subject terms in U.S. newspapers from 1980 to 2016.", "Each of these corpora contains more than 25,000 articles.", "Please refer to the supplementary material for details.", "• Research papers.", "We consider full texts of papers from two communities: our own ACL community captured by papers from ACL, NAACL, EMNLP, and TACL from 1980 to 2014 (Radev et al., 2009 ; and the NIPS community from 1987 to 2016.", "3 There are 4.8K papers from the ACL community and 6.6K papers from the NIPS community.", "The processed datasets are available at https://chenhaot.com/ pages/idea-relations.html.", "In order to operationalize ideas in a text corpus, we consider two ways to represent ideas.", "• Topics.", "We extract topics from each document by running LDA (Blei et al., 2003) on each corpus C. 
In all datasets, we set the number of topics to 50.", "4 Formally, I is the 50 topics learned from the corpus, and each document is represented as the set of topics that are present with greater than 0.01 probability in the topic distribution for that document.", "• Keywords.", "We identify a list of distinguishing keywords for each corpus by comparing its word frequencies to the background frequencies found in other corpora using the informative Dirichlet prior model in Monroe et al.", "(2008) .", "We set the number of keywords to 100 for all corpora.", "For news articles, the background corpus for each issue is comprised of all articles from the other four issues.", "For research papers, we use NIPS as the background corpus for ACL and vice versa to identify what are the core concepts for each of these research areas.", "Formally, I is the 100 top distinguishing keywords in the corpus and each document is represented as the set of keywords within I that are present in the document.", "Refer to the supplementary material for a list of example keywords in each corpus.", "In both procedures, we lemmatize all words and add common bigram phrases to the vocabulary following Mikolov et al.", "(2013) .", "Note that in our analysis, ideas are only present or absent in a document, and a document can in principle be mapped to any subset of ideas in I.", "In our experiments 90% of documents are marked as containing between 7 and 14 ideas using topics, 8 and 33 ideas using keywords.", "Characterizing the Space of Relations To provide an overview of the four relation types in Fig.", "1 , we first examine the empirical distributions of the two statistics of interest across pairs of ideas.", "In most exploratory studies, however, we are most interested in pairs that exemplify each type of relation, i.e., the most extreme points in each quadrant.", "We thus look at these pairs in each corpus to observe how the four types differ in salience across datasets.", "Empirical Distribution Properties To the best of our knowledge, the distributions of pairwise cooccurrence and prevalence correlation have not been examined in previous literature.", "We thus first investigate the marginal distributions of cooccurrence and prevalence correlation and then our framework is to analyze relations between ideas, so this choice is not essential in this work.", "(Scott, 2015) .", "The plots along the axes show the marginal distribution of the corresponding dimension.", "In each plot, we give the Pearson correlation, and all Pearson correlations' p-values are less than 10 −40 .", "In these plots, we use topics to represent ideas.", "their joint distribution.", "Fig.", "3 shows three examples: two from news articles and one from research papers.", "We will also focus our case studies on these three corpora in §4.", "The corresponding plots for keywords have been relegated to supplementary material due to space limitations.", "Cooccurrence tends to be unimodal but not normal.", "In all of our datasets, pairwise cooccurrence ( PMI) presents a unimodal distribution that somewhat resembles a normal distribution, but it is rarely precisely normal.", "We cannot reject the hypothesis that it is unimodal for any dataset (using topics or keywords) using the dip test (Hartigan and Hartigan, 1985) , though D'Agostino's K 2 test (D'Agostino et al., 1990) rejects normality in almost all cases.", "Prevalence correlation exhibits diverse distributions.", "Pairwise prevalence correlation follows different distributions in news articles 
compared to research papers: they are unimodal in news articles, but not in ACL or NIPS.", "The dip test only rejects the unimodality hypothesis in NIPS.", "None follow normal distributions based on D'Agostino's K 2 test.", "Cooccurrence is positively correlated with prevalence correlation.", "In all of our datasets, cooccurrence is positively correlated with prevalence correlation whether we use topics or keywords to represent ideas, although the Pearson correlation coefficients vary.", "This suggests that there are more friendship and head-to-head relations than tryst and arms-race relations.", "Based on the results of kernel density estimation, we also observe that this correlation is often loose, e.g., in ACL topics, cooccurrence spreads widely at each mode of prevalence correlation.", "776 Relative Strength of Extreme Pairs We are interested in how our framework can identify intriguing relations between ideas.", "These potentially interesting pairs likely correspond to the extreme points in each quadrant instead of the ones around the origin, where PMI and prevalence correlation are both close to zero.", "Here we compare the relative strength of extreme pairs in each dataset.", "We will discuss how these extreme pairs confirm existing knowledge and suggest new hypotheses via case studies in §4.", "For each relation type, we average the strengths of the 25 pairs with the strongest relations in that type, with strength defined in Eq.", "3.", "This heuristic (henceforth collective strength) allows us to collectively compare the strengths of the most prominent friendship, tryst, arms-race, and head-to-head relations.", "The results are not sensitive to the choice of 25.", "Fig.", "4 shows the collective strength of the four types in all of our datasets.", "The most common ordering is: friendship > head-to-head > arms-race > tryst.", "The fact that friendship and head-to-head relations are strong is consistent with the positive correlation between cooccurrence and prevalence correlation.", "In news, friendship is the strongest relation type, but head-to-head is the strongest in ACL topics and NIPS topics.", "This suggests, unsurprisingly, that there are stronger head-to-head competitions (i.e., one idea takes over another) between ideas in scientific research than in news.", "We also see that topics show greater strength in our scientific article collections, while keywords dominate in news, especially in friendship.", "We conjecture that terms in scientific literature are often overloaded (e.g., a tree could be a parse tree or a decision tree), necessitating some abstraction when representing ideas.", "In contrast, news stories are more self-contained and seek to employ consistent usage.", "Exploratory Studies We present case studies based on strongly related pairs of ideas in the four types of relation.", "Throughout this section, \"rank\" refers to the rank of the relation strength between a pair of ideas in its corresponding relation type.", "International Relations in Terrorism Following a decade of declining violence in the 90s, the events of September 11, 2001 precipitated a dramatic increase in concern about terrorism, and a major shift in how it was framed (Kern et al., 2003) .", "As a showcase, we consider a topic which encompasses much of the U.S. 
government's response to terrorism: \"federal, state\".", "5 We observe two topics engaging in an \"arms race\" with this one: \"afghanistan, taliban\" and \"pakistan, india\".", "These correspond to two geopolitical regions closely linked to the U.S. government's concern with terrorism, and both were sites of U.S. military action during the period of our dataset.", "Events abroad and the U.S. government's responses follow the arms-race pattern, each holding increasing 5 As in §1, we summarize each topic using a pair of strongly associated words, instead of assigning a name.", "Figure 6 : Tryst relation between arab and islam using keywords to represent ideas (#2 in tryst): these two words tend to cooccur but are anti-correlated in prevalence over time.", "In particular, islam was rarely used in coverage of terrorism in the 1980s.", "attention with the other, likely because they share the same underlying cause.", "We also observe two head-to-head rivals to the \"federal, state\" topic: \"iran, libya\" and \"israel, palestinian\".", "While these topics correspond to regions that are hotly debated in the U.S., their coverage in news tends not to correlate temporally with the U.S. government's responses to terrorism, at least during the time period of our corpus.", "Discussion of these regions was more prevalent in the 80s and 90s, with declining media coverage since then (Kern et al., 2003) .", "The relations between these topics are consistent with structural balance theory (Cartwright and Harary, 1956; Heider, 1946) , which suggests that the enemy of an enemy is a friend.", "The \"afghanistan, taliban\" topic has the strongest friendship relation with the \"pakistan, india\" topic, i.e., they are likely to cooccur and are positively correlated in prevalence.", "Similarly, the \"iran, libya\" topic is a close \"friend\" with the \"israel, palestinian\" topic (ranked 8th in friendship).", "Fig.", "5a shows the relations between the \"federal, state\" topic and four international topics.", "Edge colors indicate relation types and the number in an edge label presents the ranking of its strength in the corresponding relation type.", "Fig.", "5b and Fig.", "5c represent concrete examples in Fig.", "5a : \"federal, state\" and \"afghanistan, taliban\" follow similar trends, although \"afghanistan, taliban\" fluctuates over time due to significant events such as the September 11 attacks in 2001 and the death of Bin Laden in 2011; while \"iran, lybia\" is negatively correlated with \"federal, state\".", "In fact, more than 70% of terrorism news in the 80s contained the \"iran, lybia\" topic.", "When using keywords to represent ideas, we observe similar relations between the term homeland security and terms related to the above foreign countries.", "In addition, we highlight an interesting but unexpected tryst relation between arab and islam (Fig.", "6) .", "It is not surprising that these two words tend to cooccur in the same news articles, but the usage of islam in the news is increasing while arab is declining.", "The increasing prevalence of islam and decreasing prevalence of arab over this time period can also be seen, for example, using Google's n-gram viewer, but it of course provides no information about cooccurrence.", "This trend has not been previously noted to the best of our knowledge, although an article in the Huffington Post called for news editors to distinguish Muslim from Arab.", "6 Our observation suggests a conjecture that the news media have increasingly linked terrorism to a 
religious group rather than an ethnic group, perhaps in part due to the tie between the events of 9/11 and Afghanistan, which is not an Arab or Arabic-speaking country.", "We leave it to further investigation to confirm or reject this hypothesis.", "To further demonstrate the effectiveness of our approach, we compare a pair's rank using only cooccurrence or prevalence correlation with its rank in our framework.", "Table 1 shows the results for three pairs above.", "If we had looked at only cooccurrence or prevalence correlation, we would probably have missed these interesting pairs.", "PMI Corr \"federal, state\", \"afghanistan, taliban\" (#2 in arms-race) 43 99 \"federal, state\", \"iran, lybia\" (#2 in head-to-head) 36 56 arab, islam (#2 in tryst) 106 1,494 Ethnicity Keywords in Immigration In addition to results on topics in §1, we observe unexpected patterns about ethnicity keywords in immigration news.", "Our observation starts with a top tryst relation between latino and asian.", "Although these words are likely to cooccur, their prevalence trajectories differ, with the discussion of Asian immigrants in the 1990s giving way to a focus on the word latino from 2000 onward.", "Possible theories to explain this observation include that undocumented immigrants are generally perceived as a Latino issue, or that Latino voters are increasingly influential in U.S. elections.", "Furthermore, latino holds head-to-head relations with two subgroups of Latin American immigrants: haitian and cuban.", "In particular, the strength of the relation with haitian is ranked #18 in headto-head relations.", "Meanwhile, haitian and cuban have a friendship relation, which is again consistent with structural balance theory.", "The decreasing prevalence of haitian and cuban perhaps speaks to the shifting geographical focus of recent immigration to the U.S., and issues of the Latino panethnicity.", "In fact, a majority of Latinos prefer to identify with their national origin relative to the pan-ethnic terms (Taylor et al., 2012) .", "However, we should also note that much of this coverage relates to a set of specific refugee crises, temporarily elevating the political importance of these nations in the U.S.", "Nevertheless, the underlying social and political reasons behind these head-to-head relations are worth further investigation.", "Relations between Topics in ACL Finally, we analyze relations between topics in the ACL Anthology.", "It turns out that \"machine translation\" is at a central position among top ranked relations in all the four types (Fig.", "8) .", "7 It is part of the strongest relation in all four types except tryst (ranked #5).", "The full relation graph presents further patterns.", "Friendship demonstrates transitivity: both \"machine translation\" and \"word alignment\" have similar relations with other topics.", "But such transitivity does not hold for tryst: although the prevalence of \"rule, forest methods\" is anti-correlated with both \"machine translation\" and \"sentiment analysis\", \"sentiment analysis\" seldom cooccurs with \"rule, for-est methods\" because \"sentiment analysis\" is seldom built on parsing algorithms.", "Similarly, \"rule, forest methods\" and \"discourse (coherence)\" hold an armsrace relation: they do not tend to cooccur and both decline in relative prevalence as \"machine translation\" rises.", "The prevalence of each of these ideas in comparison to machine translation is shown in in Fig.", "9 , which reveals additional detail.", "Figure 9 : Relations between 
topics in ACL Anthology in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions), color coded to match the text.", "The y-axis represents the relative proportion of papers in a year that contain the corresponding topic.", "The top 10 words for the rule, forest methods topic are rule, grammar, derivation, span, algorithm, forest, parsing, figure, set, string.", "Concluding Discussion We proposed a method to characterize relations between ideas in texts through the lens of cooccurrence within documents and prevalence correlation over time.", "For the first time, we observe that the distribution of pairwise cooccurrence is unimodal, while the distribution of pairwise prevalence correlation is not always unimodal, and show that they are positively correlated.", "This combination suggests four types of relations between ideas, and these four types are all found to varying extents in our experiments.", "We illustrate our computational method by exploratory studies on news corpora and scientific research papers.", "We not only confirm existing knowledge but also suggest hypotheses around the usage of arab and islam in terrorism and latino and asian in immigration.", "It is important to note that the relations found using our approach depend on the nature of the representation of ideas and the source of texts.", "For instance, we cannot expect relations found in news articles to reflect shifts in public opinion if news articles do not effectively track public opinion.", "Our method is entirely observational.", "It remains as a further stage of analysis to understand the underlying reasons that lead to these relations be-tween ideas.", "In scientific research, for example, it could simply be the progress of science, i.e., newer ideas overtake older ones deemed less valuable at a given time; on the other hand, history suggests that it is not always the correct ideas that are most expressed, and many other factors may be important.", "Similarly, in news coverage, underlying sociological and political situations have significant impact on which ideas are presented, and how.", "There are many potential directions to improve our method to account for complex relations between ideas.", "For instance, we assume that both ideas and relations are statically grounded in keywords or topics.", "In reality, ideas and relations both evolve over time: a tryst relation might appear as friendship if we focus on a narrower time period.", "Similarly, new ideas show up and even the same idea may change over time and be represented by different words." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "6" ], "paper_header_content": [ "Introduction", "Computational Framework", "Cooccurrence and Prevalence Correlation", "Datasets and Representation of Ideas", "Characterizing the Space of Relations", "Empirical Distribution Properties", "Relative Strength of Extreme Pairs", "Exploratory Studies", "International Relations in Terrorism", "Ethnicity Keywords in Immigration", "Relations between Topics in ACL", "Concluding Discussion" ] }
GEM-SciDuet-train-93#paper-1238#slide-5
Friendship
immigrant, undocumented obama, president
immigrant, undocumented obama, president
[]
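A minimal sketch, not part of any dataset record, of the three statistics defined in the paper_content_text above: add-one smoothed PMI over documents (Eq. 1), Pearson correlation of yearly prevalence (Eq. 2), and their absolute product as relation strength (Eq. 3). It assumes each document has been reduced to a (year, set-of-ideas) pair, and every function and variable name here is illustrative rather than taken from the authors' released implementation at https://github.com/Noahs-ARK/idea_relations/.

import math
from collections import defaultdict

# docs: list of (year, set_of_ideas) pairs; each set holds the topics or
# keywords marked present in one document, as described in the text above.

def pmi(x, y, docs):
    # Add-one smoothed PMI (Eq. 1); the constant C is the log of the corpus size.
    n = len(docs)
    cx = sum(1 for _, ideas in docs if x in ideas)
    cy = sum(1 for _, ideas in docs if y in ideas)
    cxy = sum(1 for _, ideas in docs if x in ideas and y in ideas)
    return math.log(n) + math.log((1 + cxy) / ((1 + cx) * (1 + cy)))

def prevalence_correlation(x, y, docs):
    # Pearson correlation of the yearly fraction of documents containing each idea (Eq. 2).
    totals, fx, fy = defaultdict(int), defaultdict(int), defaultdict(int)
    for year, ideas in docs:
        totals[year] += 1
        fx[year] += x in ideas
        fy[year] += y in ideas
    years = sorted(totals)
    px = [fx[t] / totals[t] for t in years]
    py = [fy[t] / totals[t] for t in years]
    mx, my = sum(px) / len(px), sum(py) / len(py)
    cov = sum((a - mx) * (b - my) for a, b in zip(px, py))
    sx = math.sqrt(sum((a - mx) ** 2 for a in px))
    sy = math.sqrt(sum((b - my) ** 2 for b in py))
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0

def strength(x, y, docs):
    # Relation strength (Eq. 3): |PMI x correlation|.
    return abs(pmi(x, y, docs) * prevalence_correlation(x, y, docs))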
GEM-SciDuet-train-93#paper-1238#slide-6
1238
Friendships, Rivalries, and Trysts: Characterizing Relations between Ideas in Texts
Understanding how ideas relate to each other is a fundamental question in many domains, ranging from intellectual history to public communication. Because ideas are naturally embedded in texts, we propose the first framework to systematically characterize the relations between ideas based on their occurrence in a corpus of documents, independent of how these ideas are represented. Combining two statistics-cooccurrence within documents and prevalence correlation over time-our approach reveals a number of different ways in which ideas can cooperate and compete. For instance, two ideas can closely track each other's prevalence over time, and yet rarely cooccur, almost like a "cold war" scenario. We observe that pairwise cooccurrence and prevalence correlation exhibit different distributions. We further demonstrate that our approach is able to uncover intriguing relations between ideas through in-depth case studies on news articles and research papers.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Ideas exist in the mind, but are made manifest in language, where they compete with each other for the scarce resource of human attention.", "Milton (1644) used the \"marketplace of ideas\" metaphor to argue that the truth will win out when ideas freely compete; Dawkins (1976) similarly likened the evolution of ideas to natural selection of genes.", "We propose a framework to quantitatively characterize competition and cooperation between ideas in texts, independent of how they might be represented.", "By \"ideas\", we mean any discrete conceptual units that can be identified as being present or absent in a document.", "In this work, we consider representing ideas using keywords and topics obtained in an unsupervised fashion, but our way of characterizing the relations between ideas could be applied to many other types of textual representations, such as frames (Card et al., 2015) and hashtags.", "What does it mean for two ideas to compete in texts, quantitatively?", "Consider, for example, the issue of immigration.", "There are two strongly competing narratives about the roughly 11 million people 1 who are residing in the United States without permission.", "One is \"illegal aliens\", who \"steal\" jobs and deny opportunities to legal immigrants; the other is \"undocumented immigrants\", who are already part of the fabric of society and deserve a path to citizenship (Merolla et al., 2013) .", "Although prior knowledge suggests that these two narratives compete, it is not immediately obvious what measures might reveal this competition in a corpus of writing about immigration.", "One question is whether or not these two ideas cooccur in the same documents.", "In the example above, these narratives are used by distinct groups of people with different ideologies.", "The fact that they don't cooccur is one clue that they may be in competition with each other.", "However, cooccurrence is insufficient to express the selection process of ideas, i.e., some ideas fade out over time, while others rise in popularity, analogous to the populations of species in nature.", "Of the two narratives on immigration, we may expect one to win out at the expense of another as public opinion shifts.", "Alternatively, we might expect to see these narratives reinforcing each other, as both sides intensify their messaging in response to growing opposition, much like the U.S.S.R. 
and immigration, deportation republican, party Figure 1 : Relations between ideas in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions).", "We use topics from LDA (Blei et al., 2003) to represent ideas.", "Each topic is named with a pair of words that are most strongly associated with the topic in LDA.", "Subplots show examples of relations between topics found in U.S. newspaper articles on immigration from 1980 to 2016, color coded to match the description in text.", "The y-axis represents the proportion of news articles in a year (in our corpus) that contain the corresponding topic.", "All examples are among the top 3 strongest relations in each type except (\"immigrant, undocumented\", \"illegal, alien\"), which corresponds to the two competing narratives.", "We explain the formal definition of strength in §2.", "the U.S. during the cold war.", "To capture these possibilities, we use prevalence correlation over time.", "Building on these insights, we propose a framework that combines cooccurrence within documents and prevalence correlation over time.", "This framework gives rise to four possible types of relation that correspond to the four quadrants in Fig.", "1 .", "We explain each type using examples from news articles in U.S. newspapers on immigration from 1980 to 2016.", "Here, we have used LDA to identify ideas in the form of topics, and we denote each idea with a pair of words most strongly associated with the corresponding topic.", "Friendship (correlated over time, likely to cooccur).", "The \"immigrant, undocumented\" topic tends to cooccur with \"obama, president\" and both topics have been rising during the period of our dataset, likely because the \"undocumented immigrants\" narrative was an important part of Obama's framing of the immigration issue (Haynes et al., 2016) .", "Head-to-head (anti-correlated over time, unlikely to cooccur).", "\"immigrant, undocumented\" and \"illegal, alien\" are in a head-to-head competition: these two topics rarely cooccur, and \"immigrant, undocu-mented\" has been growing in prevalence, while the usage of \"illegal, alien\" in newspapers has been declining.", "This observation agrees with a report from Pew Research Center (Guskin, 2013) .", "Tryst (anti-correlated over time, likely to cooccur).", "The two off-diagonal examples use topics related to law enforcement.", "Overall, \"immigration, deportation\" and \"detention, jail\" often cooccur but \"detention, jail\" has been declining, while \"immigration, deportation\" has been rising.", "This possibly relates to the promises to overhaul the immigration detention system (Kalhan, 2010).", "2 Arms-race (correlated over time, unlikely to cooccur).", "One of the above law enforcement topics (\"immigration, deportation\") and a topic on the Republican party (\"republican, party\") hold an armsrace relation: they are both growing in prevalence over time, but rarely cooccur, perhaps suggesting an underlying common cause.", "Note that our terminology describes the relations between ideas in texts, not necessarily between the entities to which the ideas refer.", "For example, we find that the relation between \"Israel\" and \"Palestine\" is \"friendship\" in news articles on terrorism, based on their prevalence correlation and cooccurrence in that corpus.", "We introduce the formal definition of our framework in §2 and apply it to news articles on five issues and research papers from ACL Anthology and NIPS 
as testbeds.", "We operationalize ideas using topics (Blei et al., 2003) and keywords (Monroe et al., 2008) .", "To explore whether the four relation types exist and how strong these relations are, we first examine the marginal and joint distributions of cooccurrence and prevalence correlation ( §3).", "We find that cooccurrence shows a unimodal normal-shaped distribution but prevalence correlation demonstrates more diverse distributions across corpora.", "As we would expect, there are, in general, more and stronger friendship and head-to-head relations than arms-race and tryst relations.", "Second, we demonstrate the effectiveness of our framework through in-depth case studies ( §4).", "We not only validate existing knowledge about some news issues and research areas, but also identify hypotheses that require further investigation.", "For example, using keywords to represent ideas, a top pair with the tryst relation in news articles on terrorism is \"arab\" and \"islam\"; they are likely to cooccur, but \"islam\" is rising in relative prevalence while \"arab\" is declining.", "This suggests a conjecture that the news media have increasingly linked terrorism to a religious group rather than an ethnic group.", "We also show relations between topics in ACL that center around machine translation.", "Our work is a first step towards understanding relations between ideas from text corpora, a complex and important research question.", "We provide some concluding thoughts in §6.", "Computational Framework The aim of our computational framework is to explore relations between ideas.", "We thus assume that the set of relevant ideas has been identified, and those expressed in each document have been tabulated.", "Our open-source implementation is at https://github.com/Noahs-ARK/ idea_relations/.", "In the following, we introduce our formal definitions and datasets.", "∀x, y ∈ I, PMI(x, y) = logP (x, y) P (x)P (y) = C + log 1+ t k 1{x∈dt k }·1{y∈dt k } (1+ t k 1{x∈dt k })·(1+ t k 1{y∈dt k }) (1) r(x, y) = t P (x|t)−P (x|t) P (y|t)−P (y|t) t P (x|t)−P (x|t) 2 t P (y|t)−P (y|t) 2 (2) Figure 2 : Eq.", "1 is the empirical pointwise mutual information for two ideas, our measure of cooccurrence of ideas; note that we use add-one smoothing in estimating PMI.", "Eq.", "2 is the Pearson correlation between two ideas' prevalence over time.", "Cooccurrence and Prevalence Correlation As discussed in the introduction, we focus on two dimensions to quantify relations between ideas: 1. cooccurrence reveals to what extent two ideas tend to occur in the same contexts; 2. similarity between the relative prevalence of ideas over time reveals how two ideas relate in terms of popularity or coverage.", "Our input is a collection of documents, each represented by a set of ideas and indexed by time.", "We denote a static set of ideas as I and a text corpus that consists of these ideas as C = {D 1 , .", ".", ".", ", D T }, where D t = {d t 1 , .", ".", ".", ", d t N t } gives the collection of documents at timestep t, and each document, d t k , is represented as a subset of ideas in I.", "Here T is the total number of timesteps, and N t is the number of documents at timestep t. 
It follows that the total number of documents N = T t=1 N t .", "In order to formally capture the two dimensions above, we employ two commonly-used statistics.", "First, we use empirical pointwise mutual information (PMI) to capture the cooccurrence of ideas within the same document (Church and Hanks, 1990); see Eq.", "1 in Fig.", "2 .", "Positive PMI indicates that ideas occur together more frequently than would be expected if they were independent, while negative PMI indicates the opposite.", "Second, we compute the correlation between normalized document frequency of ideas to capture the relation between the relative prevalence of ideas across documents over time; see Eq.", "2 in Fig.", "2 .", "Positiver indicates that two ideas have similar prevalence over time, while negativer sug-gests two anti-correlated ideas (i.e., when one goes up, the other goes down).", "The four types of relations in the introduction can now be obtained using PMI andr, which capture cooccurrence and prevalence correlation respectively.", "We further define the strength of the relation between two ideas as the absolute value of the product of their PMI andr scores: ∀x, y ∈ I, strength(x, y) = | PMI(x, y)×r(x, y)|.", "(3) Datasets and Representation of Ideas We use two types of datasets to validate our framework: news articles and research papers.", "We choose these two domains because competition between ideas has received significant interest in history of science (Kuhn, 1996) and research on framing (Chong and Druckman, 2007; Entman, 1993; Gitlin, 1980; Lakoff, 2014) .", "Furthermore, interesting differences may exist in these two domains as news evolves with external events and scientific research progresses through innovations.", "• News articles.", "We follow the strategy in Card et al.", "(2015) to obtain news articles from Lex-isNexis on five issues: abortion, immigration, same-sex marriage, smoking, and terrorism.", "We search for relevant articles using LexisNexis subject terms in U.S. newspapers from 1980 to 2016.", "Each of these corpora contains more than 25,000 articles.", "Please refer to the supplementary material for details.", "• Research papers.", "We consider full texts of papers from two communities: our own ACL community captured by papers from ACL, NAACL, EMNLP, and TACL from 1980 to 2014 (Radev et al., 2009 ; and the NIPS community from 1987 to 2016.", "3 There are 4.8K papers from the ACL community and 6.6K papers from the NIPS community.", "The processed datasets are available at https://chenhaot.com/ pages/idea-relations.html.", "In order to operationalize ideas in a text corpus, we consider two ways to represent ideas.", "• Topics.", "We extract topics from each document by running LDA (Blei et al., 2003) on each corpus C. 
In all datasets, we set the number of topics to 50.", "4 Formally, I is the 50 topics learned from the corpus, and each document is represented as the set of topics that are present with greater than 0.01 probability in the topic distribution for that document.", "• Keywords.", "We identify a list of distinguishing keywords for each corpus by comparing its word frequencies to the background frequencies found in other corpora using the informative Dirichlet prior model in Monroe et al.", "(2008) .", "We set the number of keywords to 100 for all corpora.", "For news articles, the background corpus for each issue is comprised of all articles from the other four issues.", "For research papers, we use NIPS as the background corpus for ACL and vice versa to identify what are the core concepts for each of these research areas.", "Formally, I is the 100 top distinguishing keywords in the corpus and each document is represented as the set of keywords within I that are present in the document.", "Refer to the supplementary material for a list of example keywords in each corpus.", "In both procedures, we lemmatize all words and add common bigram phrases to the vocabulary following Mikolov et al.", "(2013) .", "Note that in our analysis, ideas are only present or absent in a document, and a document can in principle be mapped to any subset of ideas in I.", "In our experiments 90% of documents are marked as containing between 7 and 14 ideas using topics, 8 and 33 ideas using keywords.", "Characterizing the Space of Relations To provide an overview of the four relation types in Fig.", "1 , we first examine the empirical distributions of the two statistics of interest across pairs of ideas.", "In most exploratory studies, however, we are most interested in pairs that exemplify each type of relation, i.e., the most extreme points in each quadrant.", "We thus look at these pairs in each corpus to observe how the four types differ in salience across datasets.", "Empirical Distribution Properties To the best of our knowledge, the distributions of pairwise cooccurrence and prevalence correlation have not been examined in previous literature.", "We thus first investigate the marginal distributions of cooccurrence and prevalence correlation and then our framework is to analyze relations between ideas, so this choice is not essential in this work.", "(Scott, 2015) .", "The plots along the axes show the marginal distribution of the corresponding dimension.", "In each plot, we give the Pearson correlation, and all Pearson correlations' p-values are less than 10 −40 .", "In these plots, we use topics to represent ideas.", "their joint distribution.", "Fig.", "3 shows three examples: two from news articles and one from research papers.", "We will also focus our case studies on these three corpora in §4.", "The corresponding plots for keywords have been relegated to supplementary material due to space limitations.", "Cooccurrence tends to be unimodal but not normal.", "In all of our datasets, pairwise cooccurrence ( PMI) presents a unimodal distribution that somewhat resembles a normal distribution, but it is rarely precisely normal.", "We cannot reject the hypothesis that it is unimodal for any dataset (using topics or keywords) using the dip test (Hartigan and Hartigan, 1985) , though D'Agostino's K 2 test (D'Agostino et al., 1990) rejects normality in almost all cases.", "Prevalence correlation exhibits diverse distributions.", "Pairwise prevalence correlation follows different distributions in news articles 
compared to research papers: they are unimodal in news articles, but not in ACL or NIPS.", "The dip test only rejects the unimodality hypothesis in NIPS.", "None follow normal distributions based on D'Agostino's K 2 test.", "Cooccurrence is positively correlated with prevalence correlation.", "In all of our datasets, cooccurrence is positively correlated with prevalence correlation whether we use topics or keywords to represent ideas, although the Pearson correlation coefficients vary.", "This suggests that there are more friendship and head-to-head relations than tryst and arms-race relations.", "Based on the results of kernel density estimation, we also observe that this correlation is often loose, e.g., in ACL topics, cooccurrence spreads widely at each mode of prevalence correlation.", "776 Relative Strength of Extreme Pairs We are interested in how our framework can identify intriguing relations between ideas.", "These potentially interesting pairs likely correspond to the extreme points in each quadrant instead of the ones around the origin, where PMI and prevalence correlation are both close to zero.", "Here we compare the relative strength of extreme pairs in each dataset.", "We will discuss how these extreme pairs confirm existing knowledge and suggest new hypotheses via case studies in §4.", "For each relation type, we average the strengths of the 25 pairs with the strongest relations in that type, with strength defined in Eq.", "3.", "This heuristic (henceforth collective strength) allows us to collectively compare the strengths of the most prominent friendship, tryst, arms-race, and head-to-head relations.", "The results are not sensitive to the choice of 25.", "Fig.", "4 shows the collective strength of the four types in all of our datasets.", "The most common ordering is: friendship > head-to-head > arms-race > tryst.", "The fact that friendship and head-to-head relations are strong is consistent with the positive correlation between cooccurrence and prevalence correlation.", "In news, friendship is the strongest relation type, but head-to-head is the strongest in ACL topics and NIPS topics.", "This suggests, unsurprisingly, that there are stronger head-to-head competitions (i.e., one idea takes over another) between ideas in scientific research than in news.", "We also see that topics show greater strength in our scientific article collections, while keywords dominate in news, especially in friendship.", "We conjecture that terms in scientific literature are often overloaded (e.g., a tree could be a parse tree or a decision tree), necessitating some abstraction when representing ideas.", "In contrast, news stories are more self-contained and seek to employ consistent usage.", "Exploratory Studies We present case studies based on strongly related pairs of ideas in the four types of relation.", "Throughout this section, \"rank\" refers to the rank of the relation strength between a pair of ideas in its corresponding relation type.", "International Relations in Terrorism Following a decade of declining violence in the 90s, the events of September 11, 2001 precipitated a dramatic increase in concern about terrorism, and a major shift in how it was framed (Kern et al., 2003) .", "As a showcase, we consider a topic which encompasses much of the U.S. 
government's response to terrorism: \"federal, state\".", "5 We observe two topics engaging in an \"arms race\" with this one: \"afghanistan, taliban\" and \"pakistan, india\".", "These correspond to two geopolitical regions closely linked to the U.S. government's concern with terrorism, and both were sites of U.S. military action during the period of our dataset.", "Events abroad and the U.S. government's responses follow the arms-race pattern, each holding increasing 5 As in §1, we summarize each topic using a pair of strongly associated words, instead of assigning a name.", "Figure 6 : Tryst relation between arab and islam using keywords to represent ideas (#2 in tryst): these two words tend to cooccur but are anti-correlated in prevalence over time.", "In particular, islam was rarely used in coverage of terrorism in the 1980s.", "attention with the other, likely because they share the same underlying cause.", "We also observe two head-to-head rivals to the \"federal, state\" topic: \"iran, libya\" and \"israel, palestinian\".", "While these topics correspond to regions that are hotly debated in the U.S., their coverage in news tends not to correlate temporally with the U.S. government's responses to terrorism, at least during the time period of our corpus.", "Discussion of these regions was more prevalent in the 80s and 90s, with declining media coverage since then (Kern et al., 2003) .", "The relations between these topics are consistent with structural balance theory (Cartwright and Harary, 1956; Heider, 1946) , which suggests that the enemy of an enemy is a friend.", "The \"afghanistan, taliban\" topic has the strongest friendship relation with the \"pakistan, india\" topic, i.e., they are likely to cooccur and are positively correlated in prevalence.", "Similarly, the \"iran, libya\" topic is a close \"friend\" with the \"israel, palestinian\" topic (ranked 8th in friendship).", "Fig.", "5a shows the relations between the \"federal, state\" topic and four international topics.", "Edge colors indicate relation types and the number in an edge label presents the ranking of its strength in the corresponding relation type.", "Fig.", "5b and Fig.", "5c represent concrete examples in Fig.", "5a : \"federal, state\" and \"afghanistan, taliban\" follow similar trends, although \"afghanistan, taliban\" fluctuates over time due to significant events such as the September 11 attacks in 2001 and the death of Bin Laden in 2011; while \"iran, lybia\" is negatively correlated with \"federal, state\".", "In fact, more than 70% of terrorism news in the 80s contained the \"iran, lybia\" topic.", "When using keywords to represent ideas, we observe similar relations between the term homeland security and terms related to the above foreign countries.", "In addition, we highlight an interesting but unexpected tryst relation between arab and islam (Fig.", "6) .", "It is not surprising that these two words tend to cooccur in the same news articles, but the usage of islam in the news is increasing while arab is declining.", "The increasing prevalence of islam and decreasing prevalence of arab over this time period can also be seen, for example, using Google's n-gram viewer, but it of course provides no information about cooccurrence.", "This trend has not been previously noted to the best of our knowledge, although an article in the Huffington Post called for news editors to distinguish Muslim from Arab.", "6 Our observation suggests a conjecture that the news media have increasingly linked terrorism to a 
religious group rather than an ethnic group, perhaps in part due to the tie between the events of 9/11 and Afghanistan, which is not an Arab or Arabic-speaking country.", "We leave it to further investigation to confirm or reject this hypothesis.", "To further demonstrate the effectiveness of our approach, we compare a pair's rank using only cooccurrence or prevalence correlation with its rank in our framework.", "Table 1 shows the results for three pairs above.", "If we had looked at only cooccurrence or prevalence correlation, we would probably have missed these interesting pairs.", "PMI Corr \"federal, state\", \"afghanistan, taliban\" (#2 in arms-race) 43 99 \"federal, state\", \"iran, lybia\" (#2 in head-to-head) 36 56 arab, islam (#2 in tryst) 106 1,494 Ethnicity Keywords in Immigration In addition to results on topics in §1, we observe unexpected patterns about ethnicity keywords in immigration news.", "Our observation starts with a top tryst relation between latino and asian.", "Although these words are likely to cooccur, their prevalence trajectories differ, with the discussion of Asian immigrants in the 1990s giving way to a focus on the word latino from 2000 onward.", "Possible theories to explain this observation include that undocumented immigrants are generally perceived as a Latino issue, or that Latino voters are increasingly influential in U.S. elections.", "Furthermore, latino holds head-to-head relations with two subgroups of Latin American immigrants: haitian and cuban.", "In particular, the strength of the relation with haitian is ranked #18 in headto-head relations.", "Meanwhile, haitian and cuban have a friendship relation, which is again consistent with structural balance theory.", "The decreasing prevalence of haitian and cuban perhaps speaks to the shifting geographical focus of recent immigration to the U.S., and issues of the Latino panethnicity.", "In fact, a majority of Latinos prefer to identify with their national origin relative to the pan-ethnic terms (Taylor et al., 2012) .", "However, we should also note that much of this coverage relates to a set of specific refugee crises, temporarily elevating the political importance of these nations in the U.S.", "Nevertheless, the underlying social and political reasons behind these head-to-head relations are worth further investigation.", "Relations between Topics in ACL Finally, we analyze relations between topics in the ACL Anthology.", "It turns out that \"machine translation\" is at a central position among top ranked relations in all the four types (Fig.", "8) .", "7 It is part of the strongest relation in all four types except tryst (ranked #5).", "The full relation graph presents further patterns.", "Friendship demonstrates transitivity: both \"machine translation\" and \"word alignment\" have similar relations with other topics.", "But such transitivity does not hold for tryst: although the prevalence of \"rule, forest methods\" is anti-correlated with both \"machine translation\" and \"sentiment analysis\", \"sentiment analysis\" seldom cooccurs with \"rule, for-est methods\" because \"sentiment analysis\" is seldom built on parsing algorithms.", "Similarly, \"rule, forest methods\" and \"discourse (coherence)\" hold an armsrace relation: they do not tend to cooccur and both decline in relative prevalence as \"machine translation\" rises.", "The prevalence of each of these ideas in comparison to machine translation is shown in in Fig.", "9 , which reveals additional detail.", "Figure 9 : Relations between 
topics in ACL Anthology in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions), color coded to match the text.", "The y-axis represents the relative proportion of papers in a year that contain the corresponding topic.", "The top 10 words for the rule, forest methods topic are rule, grammar, derivation, span, algorithm, forest, parsing, figure, set, string.", "Concluding Discussion We proposed a method to characterize relations between ideas in texts through the lens of cooccurrence within documents and prevalence correlation over time.", "For the first time, we observe that the distribution of pairwise cooccurrence is unimodal, while the distribution of pairwise prevalence correlation is not always unimodal, and show that they are positively correlated.", "This combination suggests four types of relations between ideas, and these four types are all found to varying extents in our experiments.", "We illustrate our computational method by exploratory studies on news corpora and scientific research papers.", "We not only confirm existing knowledge but also suggest hypotheses around the usage of arab and islam in terrorism and latino and asian in immigration.", "It is important to note that the relations found using our approach depend on the nature of the representation of ideas and the source of texts.", "For instance, we cannot expect relations found in news articles to reflect shifts in public opinion if news articles do not effectively track public opinion.", "Our method is entirely observational.", "It remains as a further stage of analysis to understand the underlying reasons that lead to these relations be-tween ideas.", "In scientific research, for example, it could simply be the progress of science, i.e., newer ideas overtake older ones deemed less valuable at a given time; on the other hand, history suggests that it is not always the correct ideas that are most expressed, and many other factors may be important.", "Similarly, in news coverage, underlying sociological and political situations have significant impact on which ideas are presented, and how.", "There are many potential directions to improve our method to account for complex relations between ideas.", "For instance, we assume that both ideas and relations are statically grounded in keywords or topics.", "In reality, ideas and relations both evolve over time: a tryst relation might appear as friendship if we focus on a narrower time period.", "Similarly, new ideas show up and even the same idea may change over time and be represented by different words." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "6" ], "paper_header_content": [ "Introduction", "Computational Framework", "Cooccurrence and Prevalence Correlation", "Datasets and Representation of Ideas", "Characterizing the Space of Relations", "Empirical Distribution Properties", "Relative Strength of Extreme Pairs", "Exploratory Studies", "International Relations in Terrorism", "Ethnicity Keywords in Immigration", "Relations between Topics in ACL", "Concluding Discussion" ] }
GEM-SciDuet-train-93#paper-1238#slide-6
Arms race
immigration, deportation republican, party
immigration, deportation republican, party
[]
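The slide titles in these records (Friendship above, Arms race here, and so on) name the four quadrants described in the paper_content_text: the sign of cooccurrence (PMI) crossed with the sign of prevalence correlation. The following is a minimal sketch of that mapping, with an illustrative helper name rather than anything from the released code.

def relation_type(pmi_value, correlation):
    # Quadrants from the text: cooccurrence sign crossed with prevalence-correlation sign.
    if pmi_value > 0 and correlation > 0:
        return "friendship"    # likely to cooccur, correlated over time
    if pmi_value > 0 and correlation < 0:
        return "tryst"         # likely to cooccur, anti-correlated over time
    if pmi_value < 0 and correlation > 0:
        return "arms-race"     # unlikely to cooccur, correlated over time
    return "head-to-head"      # unlikely to cooccur, anti-correlated over time

# e.g. relation_type(0.7, 0.5) -> "friendship"; relation_type(-0.4, 0.6) -> "arms-race"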
GEM-SciDuet-train-93#paper-1238#slide-7
1238
Friendships, Rivalries, and Trysts: Characterizing Relations between Ideas in Texts
Understanding how ideas relate to each other is a fundamental question in many domains, ranging from intellectual history to public communication. Because ideas are naturally embedded in texts, we propose the first framework to systematically characterize the relations between ideas based on their occurrence in a corpus of documents, independent of how these ideas are represented. Combining two statistics-cooccurrence within documents and prevalence correlation over time-our approach reveals a number of different ways in which ideas can cooperate and compete. For instance, two ideas can closely track each other's prevalence over time, and yet rarely cooccur, almost like a "cold war" scenario. We observe that pairwise cooccurrence and prevalence correlation exhibit different distributions. We further demonstrate that our approach is able to uncover intriguing relations between ideas through in-depth case studies on news articles and research papers.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Ideas exist in the mind, but are made manifest in language, where they compete with each other for the scarce resource of human attention.", "Milton (1644) used the \"marketplace of ideas\" metaphor to argue that the truth will win out when ideas freely compete; Dawkins (1976) similarly likened the evolution of ideas to natural selection of genes.", "We propose a framework to quantitatively characterize competition and cooperation between ideas in texts, independent of how they might be represented.", "By \"ideas\", we mean any discrete conceptual units that can be identified as being present or absent in a document.", "In this work, we consider representing ideas using keywords and topics obtained in an unsupervised fashion, but our way of characterizing the relations between ideas could be applied to many other types of textual representations, such as frames (Card et al., 2015) and hashtags.", "What does it mean for two ideas to compete in texts, quantitatively?", "Consider, for example, the issue of immigration.", "There are two strongly competing narratives about the roughly 11 million people 1 who are residing in the United States without permission.", "One is \"illegal aliens\", who \"steal\" jobs and deny opportunities to legal immigrants; the other is \"undocumented immigrants\", who are already part of the fabric of society and deserve a path to citizenship (Merolla et al., 2013) .", "Although prior knowledge suggests that these two narratives compete, it is not immediately obvious what measures might reveal this competition in a corpus of writing about immigration.", "One question is whether or not these two ideas cooccur in the same documents.", "In the example above, these narratives are used by distinct groups of people with different ideologies.", "The fact that they don't cooccur is one clue that they may be in competition with each other.", "However, cooccurrence is insufficient to express the selection process of ideas, i.e., some ideas fade out over time, while others rise in popularity, analogous to the populations of species in nature.", "Of the two narratives on immigration, we may expect one to win out at the expense of another as public opinion shifts.", "Alternatively, we might expect to see these narratives reinforcing each other, as both sides intensify their messaging in response to growing opposition, much like the U.S.S.R. 
and immigration, deportation republican, party Figure 1 : Relations between ideas in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions).", "We use topics from LDA (Blei et al., 2003) to represent ideas.", "Each topic is named with a pair of words that are most strongly associated with the topic in LDA.", "Subplots show examples of relations between topics found in U.S. newspaper articles on immigration from 1980 to 2016, color coded to match the description in text.", "The y-axis represents the proportion of news articles in a year (in our corpus) that contain the corresponding topic.", "All examples are among the top 3 strongest relations in each type except (\"immigrant, undocumented\", \"illegal, alien\"), which corresponds to the two competing narratives.", "We explain the formal definition of strength in §2.", "the U.S. during the cold war.", "To capture these possibilities, we use prevalence correlation over time.", "Building on these insights, we propose a framework that combines cooccurrence within documents and prevalence correlation over time.", "This framework gives rise to four possible types of relation that correspond to the four quadrants in Fig.", "1 .", "We explain each type using examples from news articles in U.S. newspapers on immigration from 1980 to 2016.", "Here, we have used LDA to identify ideas in the form of topics, and we denote each idea with a pair of words most strongly associated with the corresponding topic.", "Friendship (correlated over time, likely to cooccur).", "The \"immigrant, undocumented\" topic tends to cooccur with \"obama, president\" and both topics have been rising during the period of our dataset, likely because the \"undocumented immigrants\" narrative was an important part of Obama's framing of the immigration issue (Haynes et al., 2016) .", "Head-to-head (anti-correlated over time, unlikely to cooccur).", "\"immigrant, undocumented\" and \"illegal, alien\" are in a head-to-head competition: these two topics rarely cooccur, and \"immigrant, undocu-mented\" has been growing in prevalence, while the usage of \"illegal, alien\" in newspapers has been declining.", "This observation agrees with a report from Pew Research Center (Guskin, 2013) .", "Tryst (anti-correlated over time, likely to cooccur).", "The two off-diagonal examples use topics related to law enforcement.", "Overall, \"immigration, deportation\" and \"detention, jail\" often cooccur but \"detention, jail\" has been declining, while \"immigration, deportation\" has been rising.", "This possibly relates to the promises to overhaul the immigration detention system (Kalhan, 2010).", "2 Arms-race (correlated over time, unlikely to cooccur).", "One of the above law enforcement topics (\"immigration, deportation\") and a topic on the Republican party (\"republican, party\") hold an armsrace relation: they are both growing in prevalence over time, but rarely cooccur, perhaps suggesting an underlying common cause.", "Note that our terminology describes the relations between ideas in texts, not necessarily between the entities to which the ideas refer.", "For example, we find that the relation between \"Israel\" and \"Palestine\" is \"friendship\" in news articles on terrorism, based on their prevalence correlation and cooccurrence in that corpus.", "We introduce the formal definition of our framework in §2 and apply it to news articles on five issues and research papers from ACL Anthology and NIPS 
as testbeds.", "We operationalize ideas using topics (Blei et al., 2003) and keywords (Monroe et al., 2008) .", "To explore whether the four relation types exist and how strong these relations are, we first examine the marginal and joint distributions of cooccurrence and prevalence correlation ( §3).", "We find that cooccurrence shows a unimodal normal-shaped distribution but prevalence correlation demonstrates more diverse distributions across corpora.", "As we would expect, there are, in general, more and stronger friendship and head-to-head relations than arms-race and tryst relations.", "Second, we demonstrate the effectiveness of our framework through in-depth case studies ( §4).", "We not only validate existing knowledge about some news issues and research areas, but also identify hypotheses that require further investigation.", "For example, using keywords to represent ideas, a top pair with the tryst relation in news articles on terrorism is \"arab\" and \"islam\"; they are likely to cooccur, but \"islam\" is rising in relative prevalence while \"arab\" is declining.", "This suggests a conjecture that the news media have increasingly linked terrorism to a religious group rather than an ethnic group.", "We also show relations between topics in ACL that center around machine translation.", "Our work is a first step towards understanding relations between ideas from text corpora, a complex and important research question.", "We provide some concluding thoughts in §6.", "Computational Framework The aim of our computational framework is to explore relations between ideas.", "We thus assume that the set of relevant ideas has been identified, and those expressed in each document have been tabulated.", "Our open-source implementation is at https://github.com/Noahs-ARK/ idea_relations/.", "In the following, we introduce our formal definitions and datasets.", "∀x, y ∈ I, PMI(x, y) = logP (x, y) P (x)P (y) = C + log 1+ t k 1{x∈dt k }·1{y∈dt k } (1+ t k 1{x∈dt k })·(1+ t k 1{y∈dt k }) (1) r(x, y) = t P (x|t)−P (x|t) P (y|t)−P (y|t) t P (x|t)−P (x|t) 2 t P (y|t)−P (y|t) 2 (2) Figure 2 : Eq.", "1 is the empirical pointwise mutual information for two ideas, our measure of cooccurrence of ideas; note that we use add-one smoothing in estimating PMI.", "Eq.", "2 is the Pearson correlation between two ideas' prevalence over time.", "Cooccurrence and Prevalence Correlation As discussed in the introduction, we focus on two dimensions to quantify relations between ideas: 1. cooccurrence reveals to what extent two ideas tend to occur in the same contexts; 2. similarity between the relative prevalence of ideas over time reveals how two ideas relate in terms of popularity or coverage.", "Our input is a collection of documents, each represented by a set of ideas and indexed by time.", "We denote a static set of ideas as I and a text corpus that consists of these ideas as C = {D 1 , .", ".", ".", ", D T }, where D t = {d t 1 , .", ".", ".", ", d t N t } gives the collection of documents at timestep t, and each document, d t k , is represented as a subset of ideas in I.", "Here T is the total number of timesteps, and N t is the number of documents at timestep t. 
It follows that the total number of documents N = T t=1 N t .", "In order to formally capture the two dimensions above, we employ two commonly-used statistics.", "First, we use empirical pointwise mutual information (PMI) to capture the cooccurrence of ideas within the same document (Church and Hanks, 1990); see Eq.", "1 in Fig.", "2 .", "Positive PMI indicates that ideas occur together more frequently than would be expected if they were independent, while negative PMI indicates the opposite.", "Second, we compute the correlation between normalized document frequency of ideas to capture the relation between the relative prevalence of ideas across documents over time; see Eq.", "2 in Fig.", "2 .", "Positiver indicates that two ideas have similar prevalence over time, while negativer sug-gests two anti-correlated ideas (i.e., when one goes up, the other goes down).", "The four types of relations in the introduction can now be obtained using PMI andr, which capture cooccurrence and prevalence correlation respectively.", "We further define the strength of the relation between two ideas as the absolute value of the product of their PMI andr scores: ∀x, y ∈ I, strength(x, y) = | PMI(x, y)×r(x, y)|.", "(3) Datasets and Representation of Ideas We use two types of datasets to validate our framework: news articles and research papers.", "We choose these two domains because competition between ideas has received significant interest in history of science (Kuhn, 1996) and research on framing (Chong and Druckman, 2007; Entman, 1993; Gitlin, 1980; Lakoff, 2014) .", "Furthermore, interesting differences may exist in these two domains as news evolves with external events and scientific research progresses through innovations.", "• News articles.", "We follow the strategy in Card et al.", "(2015) to obtain news articles from Lex-isNexis on five issues: abortion, immigration, same-sex marriage, smoking, and terrorism.", "We search for relevant articles using LexisNexis subject terms in U.S. newspapers from 1980 to 2016.", "Each of these corpora contains more than 25,000 articles.", "Please refer to the supplementary material for details.", "• Research papers.", "We consider full texts of papers from two communities: our own ACL community captured by papers from ACL, NAACL, EMNLP, and TACL from 1980 to 2014 (Radev et al., 2009 ; and the NIPS community from 1987 to 2016.", "3 There are 4.8K papers from the ACL community and 6.6K papers from the NIPS community.", "The processed datasets are available at https://chenhaot.com/ pages/idea-relations.html.", "In order to operationalize ideas in a text corpus, we consider two ways to represent ideas.", "• Topics.", "We extract topics from each document by running LDA (Blei et al., 2003) on each corpus C. 
In all datasets, we set the number of topics to 50.", "4 Formally, I is the 50 topics learned from the corpus, and each document is represented as the set of topics that are present with greater than 0.01 probability in the topic distribution for that document.", "• Keywords.", "We identify a list of distinguishing keywords for each corpus by comparing its word frequencies to the background frequencies found in other corpora using the informative Dirichlet prior model in Monroe et al.", "(2008) .", "We set the number of keywords to 100 for all corpora.", "For news articles, the background corpus for each issue is comprised of all articles from the other four issues.", "For research papers, we use NIPS as the background corpus for ACL and vice versa to identify what are the core concepts for each of these research areas.", "Formally, I is the 100 top distinguishing keywords in the corpus and each document is represented as the set of keywords within I that are present in the document.", "Refer to the supplementary material for a list of example keywords in each corpus.", "In both procedures, we lemmatize all words and add common bigram phrases to the vocabulary following Mikolov et al.", "(2013) .", "Note that in our analysis, ideas are only present or absent in a document, and a document can in principle be mapped to any subset of ideas in I.", "In our experiments 90% of documents are marked as containing between 7 and 14 ideas using topics, 8 and 33 ideas using keywords.", "Characterizing the Space of Relations To provide an overview of the four relation types in Fig.", "1 , we first examine the empirical distributions of the two statistics of interest across pairs of ideas.", "In most exploratory studies, however, we are most interested in pairs that exemplify each type of relation, i.e., the most extreme points in each quadrant.", "We thus look at these pairs in each corpus to observe how the four types differ in salience across datasets.", "Empirical Distribution Properties To the best of our knowledge, the distributions of pairwise cooccurrence and prevalence correlation have not been examined in previous literature.", "We thus first investigate the marginal distributions of cooccurrence and prevalence correlation and then our framework is to analyze relations between ideas, so this choice is not essential in this work.", "(Scott, 2015) .", "The plots along the axes show the marginal distribution of the corresponding dimension.", "In each plot, we give the Pearson correlation, and all Pearson correlations' p-values are less than 10 −40 .", "In these plots, we use topics to represent ideas.", "their joint distribution.", "Fig.", "3 shows three examples: two from news articles and one from research papers.", "We will also focus our case studies on these three corpora in §4.", "The corresponding plots for keywords have been relegated to supplementary material due to space limitations.", "Cooccurrence tends to be unimodal but not normal.", "In all of our datasets, pairwise cooccurrence ( PMI) presents a unimodal distribution that somewhat resembles a normal distribution, but it is rarely precisely normal.", "We cannot reject the hypothesis that it is unimodal for any dataset (using topics or keywords) using the dip test (Hartigan and Hartigan, 1985) , though D'Agostino's K 2 test (D'Agostino et al., 1990) rejects normality in almost all cases.", "Prevalence correlation exhibits diverse distributions.", "Pairwise prevalence correlation follows different distributions in news articles 
compared to research papers: they are unimodal in news articles, but not in ACL or NIPS.", "The dip test only rejects the unimodality hypothesis in NIPS.", "None follow normal distributions based on D'Agostino's K 2 test.", "Cooccurrence is positively correlated with prevalence correlation.", "In all of our datasets, cooccurrence is positively correlated with prevalence correlation whether we use topics or keywords to represent ideas, although the Pearson correlation coefficients vary.", "This suggests that there are more friendship and head-to-head relations than tryst and arms-race relations.", "Based on the results of kernel density estimation, we also observe that this correlation is often loose, e.g., in ACL topics, cooccurrence spreads widely at each mode of prevalence correlation.", "776 Relative Strength of Extreme Pairs We are interested in how our framework can identify intriguing relations between ideas.", "These potentially interesting pairs likely correspond to the extreme points in each quadrant instead of the ones around the origin, where PMI and prevalence correlation are both close to zero.", "Here we compare the relative strength of extreme pairs in each dataset.", "We will discuss how these extreme pairs confirm existing knowledge and suggest new hypotheses via case studies in §4.", "For each relation type, we average the strengths of the 25 pairs with the strongest relations in that type, with strength defined in Eq.", "3.", "This heuristic (henceforth collective strength) allows us to collectively compare the strengths of the most prominent friendship, tryst, arms-race, and head-to-head relations.", "The results are not sensitive to the choice of 25.", "Fig.", "4 shows the collective strength of the four types in all of our datasets.", "The most common ordering is: friendship > head-to-head > arms-race > tryst.", "The fact that friendship and head-to-head relations are strong is consistent with the positive correlation between cooccurrence and prevalence correlation.", "In news, friendship is the strongest relation type, but head-to-head is the strongest in ACL topics and NIPS topics.", "This suggests, unsurprisingly, that there are stronger head-to-head competitions (i.e., one idea takes over another) between ideas in scientific research than in news.", "We also see that topics show greater strength in our scientific article collections, while keywords dominate in news, especially in friendship.", "We conjecture that terms in scientific literature are often overloaded (e.g., a tree could be a parse tree or a decision tree), necessitating some abstraction when representing ideas.", "In contrast, news stories are more self-contained and seek to employ consistent usage.", "Exploratory Studies We present case studies based on strongly related pairs of ideas in the four types of relation.", "Throughout this section, \"rank\" refers to the rank of the relation strength between a pair of ideas in its corresponding relation type.", "International Relations in Terrorism Following a decade of declining violence in the 90s, the events of September 11, 2001 precipitated a dramatic increase in concern about terrorism, and a major shift in how it was framed (Kern et al., 2003) .", "As a showcase, we consider a topic which encompasses much of the U.S. 
government's response to terrorism: \"federal, state\".", "5 We observe two topics engaging in an \"arms race\" with this one: \"afghanistan, taliban\" and \"pakistan, india\".", "These correspond to two geopolitical regions closely linked to the U.S. government's concern with terrorism, and both were sites of U.S. military action during the period of our dataset.", "Events abroad and the U.S. government's responses follow the arms-race pattern, each holding increasing 5 As in §1, we summarize each topic using a pair of strongly associated words, instead of assigning a name.", "Figure 6 : Tryst relation between arab and islam using keywords to represent ideas (#2 in tryst): these two words tend to cooccur but are anti-correlated in prevalence over time.", "In particular, islam was rarely used in coverage of terrorism in the 1980s.", "attention with the other, likely because they share the same underlying cause.", "We also observe two head-to-head rivals to the \"federal, state\" topic: \"iran, libya\" and \"israel, palestinian\".", "While these topics correspond to regions that are hotly debated in the U.S., their coverage in news tends not to correlate temporally with the U.S. government's responses to terrorism, at least during the time period of our corpus.", "Discussion of these regions was more prevalent in the 80s and 90s, with declining media coverage since then (Kern et al., 2003) .", "The relations between these topics are consistent with structural balance theory (Cartwright and Harary, 1956; Heider, 1946) , which suggests that the enemy of an enemy is a friend.", "The \"afghanistan, taliban\" topic has the strongest friendship relation with the \"pakistan, india\" topic, i.e., they are likely to cooccur and are positively correlated in prevalence.", "Similarly, the \"iran, libya\" topic is a close \"friend\" with the \"israel, palestinian\" topic (ranked 8th in friendship).", "Fig.", "5a shows the relations between the \"federal, state\" topic and four international topics.", "Edge colors indicate relation types and the number in an edge label presents the ranking of its strength in the corresponding relation type.", "Fig.", "5b and Fig.", "5c represent concrete examples in Fig.", "5a : \"federal, state\" and \"afghanistan, taliban\" follow similar trends, although \"afghanistan, taliban\" fluctuates over time due to significant events such as the September 11 attacks in 2001 and the death of Bin Laden in 2011; while \"iran, lybia\" is negatively correlated with \"federal, state\".", "In fact, more than 70% of terrorism news in the 80s contained the \"iran, lybia\" topic.", "When using keywords to represent ideas, we observe similar relations between the term homeland security and terms related to the above foreign countries.", "In addition, we highlight an interesting but unexpected tryst relation between arab and islam (Fig.", "6) .", "It is not surprising that these two words tend to cooccur in the same news articles, but the usage of islam in the news is increasing while arab is declining.", "The increasing prevalence of islam and decreasing prevalence of arab over this time period can also be seen, for example, using Google's n-gram viewer, but it of course provides no information about cooccurrence.", "This trend has not been previously noted to the best of our knowledge, although an article in the Huffington Post called for news editors to distinguish Muslim from Arab.", "6 Our observation suggests a conjecture that the news media have increasingly linked terrorism to a 
religious group rather than an ethnic group, perhaps in part due to the tie between the events of 9/11 and Afghanistan, which is not an Arab or Arabic-speaking country.", "We leave it to further investigation to confirm or reject this hypothesis.", "To further demonstrate the effectiveness of our approach, we compare a pair's rank using only cooccurrence or prevalence correlation with its rank in our framework.", "Table 1 shows the results for three pairs above.", "If we had looked at only cooccurrence or prevalence correlation, we would probably have missed these interesting pairs.", "PMI Corr \"federal, state\", \"afghanistan, taliban\" (#2 in arms-race) 43 99 \"federal, state\", \"iran, lybia\" (#2 in head-to-head) 36 56 arab, islam (#2 in tryst) 106 1,494 Ethnicity Keywords in Immigration In addition to results on topics in §1, we observe unexpected patterns about ethnicity keywords in immigration news.", "Our observation starts with a top tryst relation between latino and asian.", "Although these words are likely to cooccur, their prevalence trajectories differ, with the discussion of Asian immigrants in the 1990s giving way to a focus on the word latino from 2000 onward.", "Possible theories to explain this observation include that undocumented immigrants are generally perceived as a Latino issue, or that Latino voters are increasingly influential in U.S. elections.", "Furthermore, latino holds head-to-head relations with two subgroups of Latin American immigrants: haitian and cuban.", "In particular, the strength of the relation with haitian is ranked #18 in headto-head relations.", "Meanwhile, haitian and cuban have a friendship relation, which is again consistent with structural balance theory.", "The decreasing prevalence of haitian and cuban perhaps speaks to the shifting geographical focus of recent immigration to the U.S., and issues of the Latino panethnicity.", "In fact, a majority of Latinos prefer to identify with their national origin relative to the pan-ethnic terms (Taylor et al., 2012) .", "However, we should also note that much of this coverage relates to a set of specific refugee crises, temporarily elevating the political importance of these nations in the U.S.", "Nevertheless, the underlying social and political reasons behind these head-to-head relations are worth further investigation.", "Relations between Topics in ACL Finally, we analyze relations between topics in the ACL Anthology.", "It turns out that \"machine translation\" is at a central position among top ranked relations in all the four types (Fig.", "8) .", "7 It is part of the strongest relation in all four types except tryst (ranked #5).", "The full relation graph presents further patterns.", "Friendship demonstrates transitivity: both \"machine translation\" and \"word alignment\" have similar relations with other topics.", "But such transitivity does not hold for tryst: although the prevalence of \"rule, forest methods\" is anti-correlated with both \"machine translation\" and \"sentiment analysis\", \"sentiment analysis\" seldom cooccurs with \"rule, for-est methods\" because \"sentiment analysis\" is seldom built on parsing algorithms.", "Similarly, \"rule, forest methods\" and \"discourse (coherence)\" hold an armsrace relation: they do not tend to cooccur and both decline in relative prevalence as \"machine translation\" rises.", "The prevalence of each of these ideas in comparison to machine translation is shown in in Fig.", "9 , which reveals additional detail.", "Figure 9 : Relations between 
topics in ACL Anthology in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions), color coded to match the text.", "The y-axis represents the relative proportion of papers in a year that contain the corresponding topic.", "The top 10 words for the rule, forest methods topic are rule, grammar, derivation, span, algorithm, forest, parsing, figure, set, string.", "Concluding Discussion We proposed a method to characterize relations between ideas in texts through the lens of cooccurrence within documents and prevalence correlation over time.", "For the first time, we observe that the distribution of pairwise cooccurrence is unimodal, while the distribution of pairwise prevalence correlation is not always unimodal, and show that they are positively correlated.", "This combination suggests four types of relations between ideas, and these four types are all found to varying extents in our experiments.", "We illustrate our computational method by exploratory studies on news corpora and scientific research papers.", "We not only confirm existing knowledge but also suggest hypotheses around the usage of arab and islam in terrorism and latino and asian in immigration.", "It is important to note that the relations found using our approach depend on the nature of the representation of ideas and the source of texts.", "For instance, we cannot expect relations found in news articles to reflect shifts in public opinion if news articles do not effectively track public opinion.", "Our method is entirely observational.", "It remains as a further stage of analysis to understand the underlying reasons that lead to these relations be-tween ideas.", "In scientific research, for example, it could simply be the progress of science, i.e., newer ideas overtake older ones deemed less valuable at a given time; on the other hand, history suggests that it is not always the correct ideas that are most expressed, and many other factors may be important.", "Similarly, in news coverage, underlying sociological and political situations have significant impact on which ideas are presented, and how.", "There are many potential directions to improve our method to account for complex relations between ideas.", "For instance, we assume that both ideas and relations are statically grounded in keywords or topics.", "In reality, ideas and relations both evolve over time: a tryst relation might appear as friendship if we focus on a narrower time period.", "Similarly, new ideas show up and even the same idea may change over time and be represented by different words." ] }
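The paper content reproduced above defines the full framework: add-one-smoothed PMI for within-document cooccurrence (Eq. 1), Pearson correlation of yearly prevalence (Eq. 2), the absolute product of the two as relation strength (Eq. 3), and the four quadrants (friendship, tryst, arms-race, head-to-head). Below is a minimal sketch of those computations, assuming documents are given as (year, set-of-ideas) pairs; all function and variable names are illustrative and are not taken from the authors' released implementation.

```python
import math
from collections import defaultdict

from scipy.stats import pearsonr


def pair_relation(docs, x, y):
    """docs: list of (year, set_of_ideas) pairs; x, y: two ideas from the idea set I."""
    per_year = defaultdict(lambda: [0, 0, 0])  # year -> [#docs, #docs with x, #docs with y]
    n_x = n_y = n_xy = 0
    for year, ideas in docs:
        counts = per_year[year]
        counts[0] += 1
        if x in ideas:
            n_x += 1
            counts[1] += 1
        if y in ideas:
            n_y += 1
            counts[2] += 1
        if x in ideas and y in ideas:
            n_xy += 1

    # Eq. 1: empirical PMI with add-one smoothing, here instantiating the constant C as log N.
    n = len(docs)
    pmi = math.log(n * (1 + n_xy) / ((1 + n_x) * (1 + n_y)))

    # Eq. 2: Pearson correlation of the normalized document frequencies over time.
    years = sorted(per_year)
    freq_x = [per_year[t][1] / per_year[t][0] for t in years]
    freq_y = [per_year[t][2] / per_year[t][0] for t in years]
    corr, _ = pearsonr(freq_x, freq_y)

    # Eq. 3 and the four quadrants of Fig. 1.
    strength = abs(pmi * corr)
    quadrant = {(True, True): "friendship", (True, False): "tryst",
                (False, True): "arms-race", (False, False): "head-to-head"}
    return pmi, corr, strength, quadrant[(pmi > 0, corr > 0)]
```

Ranking pairs by `strength` within each quadrant recovers the "extreme pairs" that the case studies in the text are built on.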
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "6" ], "paper_header_content": [ "Introduction", "Computational Framework", "Cooccurrence and Prevalence Correlation", "Datasets and Representation of Ideas", "Characterizing the Space of Relations", "Empirical Distribution Properties", "Relative Strength of Extreme Pairs", "Exploratory Studies", "International Relations in Terrorism", "Ethnicity Keywords in Immigration", "Relations between Topics in ACL", "Concluding Discussion" ] }
GEM-SciDuet-train-93#paper-1238#slide-7
Tryst
immigration, deportation detainee, detention
immigration, deportation detainee, detention
[]
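The distributional observations reported in the content above (no dataset passes D'Agostino's K² normality test, the dip test rejects unimodality only for NIPS, and PMI is positively correlated with prevalence correlation) correspond to standard statistical checks. A sketch follows; scipy's `normaltest` implements the K² test, while a Hartigan dip test would be run on the same arrays with a separate package (e.g., the third-party `diptest` library), which is not shown here. Array names are illustrative.

```python
import numpy as np
from scipy.stats import normaltest, pearsonr


def distribution_checks(pmi_values, corr_values):
    """pmi_values, corr_values: one entry per pair of ideas, aligned."""
    pmi = np.asarray(pmi_values, dtype=float)
    corr = np.asarray(corr_values, dtype=float)
    _, p_pmi = normaltest(pmi)        # D'Agostino's K^2: small p rejects normality
    _, p_corr = normaltest(corr)
    rho, p_rho = pearsonr(pmi, corr)  # cooccurrence vs. prevalence correlation
    return {"normality_p_pmi": p_pmi, "normality_p_corr": p_corr,
            "pearson_r": rho, "pearson_p": p_rho}
```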
GEM-SciDuet-train-93#paper-1238#slide-8
1238
Friendships, Rivalries, and Trysts: Characterizing Relations between Ideas in Texts
GEM-SciDuet-train-93#paper-1238#slide-8
A wide range of datasets
Newspapers and research articles as datasets
Newspapers and research articles as datasets
[]
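The paper content reproduced in these records also specifies how ideas are operationalized as topics: a 50-topic LDA model, with each document mapped to the set of topics whose probability exceeds 0.01. The sketch below uses gensim as one possible LDA implementation; the text only cites Blei et al. (2003), so the library choice and all names here are assumptions.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel


def topic_idea_sets(tokenized_docs, num_topics=50, threshold=0.01):
    """tokenized_docs: list of token lists; returns one set of topic ids per document."""
    vocab = Dictionary(tokenized_docs)
    bows = [vocab.doc2bow(doc) for doc in tokenized_docs]
    lda = LdaModel(bows, num_topics=num_topics, id2word=vocab)
    return [{topic for topic, _ in lda.get_document_topics(bow, minimum_probability=threshold)}
            for bow in bows]
```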
GEM-SciDuet-train-93#paper-1238#slide-9
1238
Friendships, Rivalries, and Trysts: Characterizing Relations between Ideas in Texts
Understanding how ideas relate to each other is a fundamental question in many domains, ranging from intellectual history to public communication. Because ideas are naturally embedded in texts, we propose the first framework to systematically characterize the relations between ideas based on their occurrence in a corpus of documents, independent of how these ideas are represented. Combining two statistics-cooccurrence within documents and prevalence correlation over time-our approach reveals a number of different ways in which ideas can cooperate and compete. For instance, two ideas can closely track each other's prevalence over time, and yet rarely cooccur, almost like a "cold war" scenario. We observe that pairwise cooccurrence and prevalence correlation exhibit different distributions. We further demonstrate that our approach is able to uncover intriguing relations between ideas through in-depth case studies on news articles and research papers.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Ideas exist in the mind, but are made manifest in language, where they compete with each other for the scarce resource of human attention.", "Milton (1644) used the \"marketplace of ideas\" metaphor to argue that the truth will win out when ideas freely compete; Dawkins (1976) similarly likened the evolution of ideas to natural selection of genes.", "We propose a framework to quantitatively characterize competition and cooperation between ideas in texts, independent of how they might be represented.", "By \"ideas\", we mean any discrete conceptual units that can be identified as being present or absent in a document.", "In this work, we consider representing ideas using keywords and topics obtained in an unsupervised fashion, but our way of characterizing the relations between ideas could be applied to many other types of textual representations, such as frames (Card et al., 2015) and hashtags.", "What does it mean for two ideas to compete in texts, quantitatively?", "Consider, for example, the issue of immigration.", "There are two strongly competing narratives about the roughly 11 million people 1 who are residing in the United States without permission.", "One is \"illegal aliens\", who \"steal\" jobs and deny opportunities to legal immigrants; the other is \"undocumented immigrants\", who are already part of the fabric of society and deserve a path to citizenship (Merolla et al., 2013) .", "Although prior knowledge suggests that these two narratives compete, it is not immediately obvious what measures might reveal this competition in a corpus of writing about immigration.", "One question is whether or not these two ideas cooccur in the same documents.", "In the example above, these narratives are used by distinct groups of people with different ideologies.", "The fact that they don't cooccur is one clue that they may be in competition with each other.", "However, cooccurrence is insufficient to express the selection process of ideas, i.e., some ideas fade out over time, while others rise in popularity, analogous to the populations of species in nature.", "Of the two narratives on immigration, we may expect one to win out at the expense of another as public opinion shifts.", "Alternatively, we might expect to see these narratives reinforcing each other, as both sides intensify their messaging in response to growing opposition, much like the U.S.S.R. 
and immigration, deportation republican, party Figure 1 : Relations between ideas in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions).", "We use topics from LDA (Blei et al., 2003) to represent ideas.", "Each topic is named with a pair of words that are most strongly associated with the topic in LDA.", "Subplots show examples of relations between topics found in U.S. newspaper articles on immigration from 1980 to 2016, color coded to match the description in text.", "The y-axis represents the proportion of news articles in a year (in our corpus) that contain the corresponding topic.", "All examples are among the top 3 strongest relations in each type except (\"immigrant, undocumented\", \"illegal, alien\"), which corresponds to the two competing narratives.", "We explain the formal definition of strength in §2.", "the U.S. during the cold war.", "To capture these possibilities, we use prevalence correlation over time.", "Building on these insights, we propose a framework that combines cooccurrence within documents and prevalence correlation over time.", "This framework gives rise to four possible types of relation that correspond to the four quadrants in Fig.", "1 .", "We explain each type using examples from news articles in U.S. newspapers on immigration from 1980 to 2016.", "Here, we have used LDA to identify ideas in the form of topics, and we denote each idea with a pair of words most strongly associated with the corresponding topic.", "Friendship (correlated over time, likely to cooccur).", "The \"immigrant, undocumented\" topic tends to cooccur with \"obama, president\" and both topics have been rising during the period of our dataset, likely because the \"undocumented immigrants\" narrative was an important part of Obama's framing of the immigration issue (Haynes et al., 2016) .", "Head-to-head (anti-correlated over time, unlikely to cooccur).", "\"immigrant, undocumented\" and \"illegal, alien\" are in a head-to-head competition: these two topics rarely cooccur, and \"immigrant, undocu-mented\" has been growing in prevalence, while the usage of \"illegal, alien\" in newspapers has been declining.", "This observation agrees with a report from Pew Research Center (Guskin, 2013) .", "Tryst (anti-correlated over time, likely to cooccur).", "The two off-diagonal examples use topics related to law enforcement.", "Overall, \"immigration, deportation\" and \"detention, jail\" often cooccur but \"detention, jail\" has been declining, while \"immigration, deportation\" has been rising.", "This possibly relates to the promises to overhaul the immigration detention system (Kalhan, 2010).", "2 Arms-race (correlated over time, unlikely to cooccur).", "One of the above law enforcement topics (\"immigration, deportation\") and a topic on the Republican party (\"republican, party\") hold an armsrace relation: they are both growing in prevalence over time, but rarely cooccur, perhaps suggesting an underlying common cause.", "Note that our terminology describes the relations between ideas in texts, not necessarily between the entities to which the ideas refer.", "For example, we find that the relation between \"Israel\" and \"Palestine\" is \"friendship\" in news articles on terrorism, based on their prevalence correlation and cooccurrence in that corpus.", "We introduce the formal definition of our framework in §2 and apply it to news articles on five issues and research papers from ACL Anthology and NIPS 
as testbeds.", "We operationalize ideas using topics (Blei et al., 2003) and keywords (Monroe et al., 2008) .", "To explore whether the four relation types exist and how strong these relations are, we first examine the marginal and joint distributions of cooccurrence and prevalence correlation ( §3).", "We find that cooccurrence shows a unimodal normal-shaped distribution but prevalence correlation demonstrates more diverse distributions across corpora.", "As we would expect, there are, in general, more and stronger friendship and head-to-head relations than arms-race and tryst relations.", "Second, we demonstrate the effectiveness of our framework through in-depth case studies ( §4).", "We not only validate existing knowledge about some news issues and research areas, but also identify hypotheses that require further investigation.", "For example, using keywords to represent ideas, a top pair with the tryst relation in news articles on terrorism is \"arab\" and \"islam\"; they are likely to cooccur, but \"islam\" is rising in relative prevalence while \"arab\" is declining.", "This suggests a conjecture that the news media have increasingly linked terrorism to a religious group rather than an ethnic group.", "We also show relations between topics in ACL that center around machine translation.", "Our work is a first step towards understanding relations between ideas from text corpora, a complex and important research question.", "We provide some concluding thoughts in §6.", "Computational Framework The aim of our computational framework is to explore relations between ideas.", "We thus assume that the set of relevant ideas has been identified, and those expressed in each document have been tabulated.", "Our open-source implementation is at https://github.com/Noahs-ARK/ idea_relations/.", "In the following, we introduce our formal definitions and datasets.", "∀x, y ∈ I, PMI(x, y) = logP (x, y) P (x)P (y) = C + log 1+ t k 1{x∈dt k }·1{y∈dt k } (1+ t k 1{x∈dt k })·(1+ t k 1{y∈dt k }) (1) r(x, y) = t P (x|t)−P (x|t) P (y|t)−P (y|t) t P (x|t)−P (x|t) 2 t P (y|t)−P (y|t) 2 (2) Figure 2 : Eq.", "1 is the empirical pointwise mutual information for two ideas, our measure of cooccurrence of ideas; note that we use add-one smoothing in estimating PMI.", "Eq.", "2 is the Pearson correlation between two ideas' prevalence over time.", "Cooccurrence and Prevalence Correlation As discussed in the introduction, we focus on two dimensions to quantify relations between ideas: 1. cooccurrence reveals to what extent two ideas tend to occur in the same contexts; 2. similarity between the relative prevalence of ideas over time reveals how two ideas relate in terms of popularity or coverage.", "Our input is a collection of documents, each represented by a set of ideas and indexed by time.", "We denote a static set of ideas as I and a text corpus that consists of these ideas as C = {D 1 , .", ".", ".", ", D T }, where D t = {d t 1 , .", ".", ".", ", d t N t } gives the collection of documents at timestep t, and each document, d t k , is represented as a subset of ideas in I.", "Here T is the total number of timesteps, and N t is the number of documents at timestep t. 
It follows that the total number of documents N = T t=1 N t .", "In order to formally capture the two dimensions above, we employ two commonly-used statistics.", "First, we use empirical pointwise mutual information (PMI) to capture the cooccurrence of ideas within the same document (Church and Hanks, 1990); see Eq.", "1 in Fig.", "2 .", "Positive PMI indicates that ideas occur together more frequently than would be expected if they were independent, while negative PMI indicates the opposite.", "Second, we compute the correlation between normalized document frequency of ideas to capture the relation between the relative prevalence of ideas across documents over time; see Eq.", "2 in Fig.", "2 .", "Positiver indicates that two ideas have similar prevalence over time, while negativer sug-gests two anti-correlated ideas (i.e., when one goes up, the other goes down).", "The four types of relations in the introduction can now be obtained using PMI andr, which capture cooccurrence and prevalence correlation respectively.", "We further define the strength of the relation between two ideas as the absolute value of the product of their PMI andr scores: ∀x, y ∈ I, strength(x, y) = | PMI(x, y)×r(x, y)|.", "(3) Datasets and Representation of Ideas We use two types of datasets to validate our framework: news articles and research papers.", "We choose these two domains because competition between ideas has received significant interest in history of science (Kuhn, 1996) and research on framing (Chong and Druckman, 2007; Entman, 1993; Gitlin, 1980; Lakoff, 2014) .", "Furthermore, interesting differences may exist in these two domains as news evolves with external events and scientific research progresses through innovations.", "• News articles.", "We follow the strategy in Card et al.", "(2015) to obtain news articles from Lex-isNexis on five issues: abortion, immigration, same-sex marriage, smoking, and terrorism.", "We search for relevant articles using LexisNexis subject terms in U.S. newspapers from 1980 to 2016.", "Each of these corpora contains more than 25,000 articles.", "Please refer to the supplementary material for details.", "• Research papers.", "We consider full texts of papers from two communities: our own ACL community captured by papers from ACL, NAACL, EMNLP, and TACL from 1980 to 2014 (Radev et al., 2009 ; and the NIPS community from 1987 to 2016.", "3 There are 4.8K papers from the ACL community and 6.6K papers from the NIPS community.", "The processed datasets are available at https://chenhaot.com/ pages/idea-relations.html.", "In order to operationalize ideas in a text corpus, we consider two ways to represent ideas.", "• Topics.", "We extract topics from each document by running LDA (Blei et al., 2003) on each corpus C. 
In all datasets, we set the number of topics to 50.", "4 Formally, I is the 50 topics learned from the corpus, and each document is represented as the set of topics that are present with greater than 0.01 probability in the topic distribution for that document.", "• Keywords.", "We identify a list of distinguishing keywords for each corpus by comparing its word frequencies to the background frequencies found in other corpora using the informative Dirichlet prior model in Monroe et al.", "(2008) .", "We set the number of keywords to 100 for all corpora.", "For news articles, the background corpus for each issue is comprised of all articles from the other four issues.", "For research papers, we use NIPS as the background corpus for ACL and vice versa to identify what are the core concepts for each of these research areas.", "Formally, I is the 100 top distinguishing keywords in the corpus and each document is represented as the set of keywords within I that are present in the document.", "Refer to the supplementary material for a list of example keywords in each corpus.", "In both procedures, we lemmatize all words and add common bigram phrases to the vocabulary following Mikolov et al.", "(2013) .", "Note that in our analysis, ideas are only present or absent in a document, and a document can in principle be mapped to any subset of ideas in I.", "In our experiments 90% of documents are marked as containing between 7 and 14 ideas using topics, 8 and 33 ideas using keywords.", "Characterizing the Space of Relations To provide an overview of the four relation types in Fig.", "1 , we first examine the empirical distributions of the two statistics of interest across pairs of ideas.", "In most exploratory studies, however, we are most interested in pairs that exemplify each type of relation, i.e., the most extreme points in each quadrant.", "We thus look at these pairs in each corpus to observe how the four types differ in salience across datasets.", "Empirical Distribution Properties To the best of our knowledge, the distributions of pairwise cooccurrence and prevalence correlation have not been examined in previous literature.", "We thus first investigate the marginal distributions of cooccurrence and prevalence correlation and then our framework is to analyze relations between ideas, so this choice is not essential in this work.", "(Scott, 2015) .", "The plots along the axes show the marginal distribution of the corresponding dimension.", "In each plot, we give the Pearson correlation, and all Pearson correlations' p-values are less than 10 −40 .", "In these plots, we use topics to represent ideas.", "their joint distribution.", "Fig.", "3 shows three examples: two from news articles and one from research papers.", "We will also focus our case studies on these three corpora in §4.", "The corresponding plots for keywords have been relegated to supplementary material due to space limitations.", "Cooccurrence tends to be unimodal but not normal.", "In all of our datasets, pairwise cooccurrence ( PMI) presents a unimodal distribution that somewhat resembles a normal distribution, but it is rarely precisely normal.", "We cannot reject the hypothesis that it is unimodal for any dataset (using topics or keywords) using the dip test (Hartigan and Hartigan, 1985) , though D'Agostino's K 2 test (D'Agostino et al., 1990) rejects normality in almost all cases.", "Prevalence correlation exhibits diverse distributions.", "Pairwise prevalence correlation follows different distributions in news articles 
compared to research papers: they are unimodal in news articles, but not in ACL or NIPS.", "The dip test only rejects the unimodality hypothesis in NIPS.", "None follow normal distributions based on D'Agostino's K 2 test.", "Cooccurrence is positively correlated with prevalence correlation.", "In all of our datasets, cooccurrence is positively correlated with prevalence correlation whether we use topics or keywords to represent ideas, although the Pearson correlation coefficients vary.", "This suggests that there are more friendship and head-to-head relations than tryst and arms-race relations.", "Based on the results of kernel density estimation, we also observe that this correlation is often loose, e.g., in ACL topics, cooccurrence spreads widely at each mode of prevalence correlation.", "776 Relative Strength of Extreme Pairs We are interested in how our framework can identify intriguing relations between ideas.", "These potentially interesting pairs likely correspond to the extreme points in each quadrant instead of the ones around the origin, where PMI and prevalence correlation are both close to zero.", "Here we compare the relative strength of extreme pairs in each dataset.", "We will discuss how these extreme pairs confirm existing knowledge and suggest new hypotheses via case studies in §4.", "For each relation type, we average the strengths of the 25 pairs with the strongest relations in that type, with strength defined in Eq.", "3.", "This heuristic (henceforth collective strength) allows us to collectively compare the strengths of the most prominent friendship, tryst, arms-race, and head-to-head relations.", "The results are not sensitive to the choice of 25.", "Fig.", "4 shows the collective strength of the four types in all of our datasets.", "The most common ordering is: friendship > head-to-head > arms-race > tryst.", "The fact that friendship and head-to-head relations are strong is consistent with the positive correlation between cooccurrence and prevalence correlation.", "In news, friendship is the strongest relation type, but head-to-head is the strongest in ACL topics and NIPS topics.", "This suggests, unsurprisingly, that there are stronger head-to-head competitions (i.e., one idea takes over another) between ideas in scientific research than in news.", "We also see that topics show greater strength in our scientific article collections, while keywords dominate in news, especially in friendship.", "We conjecture that terms in scientific literature are often overloaded (e.g., a tree could be a parse tree or a decision tree), necessitating some abstraction when representing ideas.", "In contrast, news stories are more self-contained and seek to employ consistent usage.", "Exploratory Studies We present case studies based on strongly related pairs of ideas in the four types of relation.", "Throughout this section, \"rank\" refers to the rank of the relation strength between a pair of ideas in its corresponding relation type.", "International Relations in Terrorism Following a decade of declining violence in the 90s, the events of September 11, 2001 precipitated a dramatic increase in concern about terrorism, and a major shift in how it was framed (Kern et al., 2003) .", "As a showcase, we consider a topic which encompasses much of the U.S. 
government's response to terrorism: \"federal, state\".", "5 We observe two topics engaging in an \"arms race\" with this one: \"afghanistan, taliban\" and \"pakistan, india\".", "These correspond to two geopolitical regions closely linked to the U.S. government's concern with terrorism, and both were sites of U.S. military action during the period of our dataset.", "Events abroad and the U.S. government's responses follow the arms-race pattern, each holding increasing 5 As in §1, we summarize each topic using a pair of strongly associated words, instead of assigning a name.", "Figure 6 : Tryst relation between arab and islam using keywords to represent ideas (#2 in tryst): these two words tend to cooccur but are anti-correlated in prevalence over time.", "In particular, islam was rarely used in coverage of terrorism in the 1980s.", "attention with the other, likely because they share the same underlying cause.", "We also observe two head-to-head rivals to the \"federal, state\" topic: \"iran, libya\" and \"israel, palestinian\".", "While these topics correspond to regions that are hotly debated in the U.S., their coverage in news tends not to correlate temporally with the U.S. government's responses to terrorism, at least during the time period of our corpus.", "Discussion of these regions was more prevalent in the 80s and 90s, with declining media coverage since then (Kern et al., 2003) .", "The relations between these topics are consistent with structural balance theory (Cartwright and Harary, 1956; Heider, 1946) , which suggests that the enemy of an enemy is a friend.", "The \"afghanistan, taliban\" topic has the strongest friendship relation with the \"pakistan, india\" topic, i.e., they are likely to cooccur and are positively correlated in prevalence.", "Similarly, the \"iran, libya\" topic is a close \"friend\" with the \"israel, palestinian\" topic (ranked 8th in friendship).", "Fig.", "5a shows the relations between the \"federal, state\" topic and four international topics.", "Edge colors indicate relation types and the number in an edge label presents the ranking of its strength in the corresponding relation type.", "Fig.", "5b and Fig.", "5c represent concrete examples in Fig.", "5a : \"federal, state\" and \"afghanistan, taliban\" follow similar trends, although \"afghanistan, taliban\" fluctuates over time due to significant events such as the September 11 attacks in 2001 and the death of Bin Laden in 2011; while \"iran, lybia\" is negatively correlated with \"federal, state\".", "In fact, more than 70% of terrorism news in the 80s contained the \"iran, lybia\" topic.", "When using keywords to represent ideas, we observe similar relations between the term homeland security and terms related to the above foreign countries.", "In addition, we highlight an interesting but unexpected tryst relation between arab and islam (Fig.", "6) .", "It is not surprising that these two words tend to cooccur in the same news articles, but the usage of islam in the news is increasing while arab is declining.", "The increasing prevalence of islam and decreasing prevalence of arab over this time period can also be seen, for example, using Google's n-gram viewer, but it of course provides no information about cooccurrence.", "This trend has not been previously noted to the best of our knowledge, although an article in the Huffington Post called for news editors to distinguish Muslim from Arab.", "6 Our observation suggests a conjecture that the news media have increasingly linked terrorism to a 
religious group rather than an ethnic group, perhaps in part due to the tie between the events of 9/11 and Afghanistan, which is not an Arab or Arabic-speaking country.", "We leave it to further investigation to confirm or reject this hypothesis.", "To further demonstrate the effectiveness of our approach, we compare a pair's rank using only cooccurrence or prevalence correlation with its rank in our framework.", "Table 1 shows the results for three pairs above.", "If we had looked at only cooccurrence or prevalence correlation, we would probably have missed these interesting pairs.", "PMI Corr \"federal, state\", \"afghanistan, taliban\" (#2 in arms-race) 43 99 \"federal, state\", \"iran, lybia\" (#2 in head-to-head) 36 56 arab, islam (#2 in tryst) 106 1,494 Ethnicity Keywords in Immigration In addition to results on topics in §1, we observe unexpected patterns about ethnicity keywords in immigration news.", "Our observation starts with a top tryst relation between latino and asian.", "Although these words are likely to cooccur, their prevalence trajectories differ, with the discussion of Asian immigrants in the 1990s giving way to a focus on the word latino from 2000 onward.", "Possible theories to explain this observation include that undocumented immigrants are generally perceived as a Latino issue, or that Latino voters are increasingly influential in U.S. elections.", "Furthermore, latino holds head-to-head relations with two subgroups of Latin American immigrants: haitian and cuban.", "In particular, the strength of the relation with haitian is ranked #18 in headto-head relations.", "Meanwhile, haitian and cuban have a friendship relation, which is again consistent with structural balance theory.", "The decreasing prevalence of haitian and cuban perhaps speaks to the shifting geographical focus of recent immigration to the U.S., and issues of the Latino panethnicity.", "In fact, a majority of Latinos prefer to identify with their national origin relative to the pan-ethnic terms (Taylor et al., 2012) .", "However, we should also note that much of this coverage relates to a set of specific refugee crises, temporarily elevating the political importance of these nations in the U.S.", "Nevertheless, the underlying social and political reasons behind these head-to-head relations are worth further investigation.", "Relations between Topics in ACL Finally, we analyze relations between topics in the ACL Anthology.", "It turns out that \"machine translation\" is at a central position among top ranked relations in all the four types (Fig.", "8) .", "7 It is part of the strongest relation in all four types except tryst (ranked #5).", "The full relation graph presents further patterns.", "Friendship demonstrates transitivity: both \"machine translation\" and \"word alignment\" have similar relations with other topics.", "But such transitivity does not hold for tryst: although the prevalence of \"rule, forest methods\" is anti-correlated with both \"machine translation\" and \"sentiment analysis\", \"sentiment analysis\" seldom cooccurs with \"rule, for-est methods\" because \"sentiment analysis\" is seldom built on parsing algorithms.", "Similarly, \"rule, forest methods\" and \"discourse (coherence)\" hold an armsrace relation: they do not tend to cooccur and both decline in relative prevalence as \"machine translation\" rises.", "The prevalence of each of these ideas in comparison to machine translation is shown in in Fig.", "9 , which reveals additional detail.", "Figure 9 : Relations between 
topics in ACL Anthology in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions), color coded to match the text.", "The y-axis represents the relative proportion of papers in a year that contain the corresponding topic.", "The top 10 words for the rule, forest methods topic are rule, grammar, derivation, span, algorithm, forest, parsing, figure, set, string.", "Concluding Discussion We proposed a method to characterize relations between ideas in texts through the lens of cooccurrence within documents and prevalence correlation over time.", "For the first time, we observe that the distribution of pairwise cooccurrence is unimodal, while the distribution of pairwise prevalence correlation is not always unimodal, and show that they are positively correlated.", "This combination suggests four types of relations between ideas, and these four types are all found to varying extents in our experiments.", "We illustrate our computational method by exploratory studies on news corpora and scientific research papers.", "We not only confirm existing knowledge but also suggest hypotheses around the usage of arab and islam in terrorism and latino and asian in immigration.", "It is important to note that the relations found using our approach depend on the nature of the representation of ideas and the source of texts.", "For instance, we cannot expect relations found in news articles to reflect shifts in public opinion if news articles do not effectively track public opinion.", "Our method is entirely observational.", "It remains as a further stage of analysis to understand the underlying reasons that lead to these relations be-tween ideas.", "In scientific research, for example, it could simply be the progress of science, i.e., newer ideas overtake older ones deemed less valuable at a given time; on the other hand, history suggests that it is not always the correct ideas that are most expressed, and many other factors may be important.", "Similarly, in news coverage, underlying sociological and political situations have significant impact on which ideas are presented, and how.", "There are many potential directions to improve our method to account for complex relations between ideas.", "For instance, we assume that both ideas and relations are statically grounded in keywords or topics.", "In reality, ideas and relations both evolve over time: a tryst relation might appear as friendship if we focus on a narrower time period.", "Similarly, new ideas show up and even the same idea may change over time and be represented by different words." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "6" ], "paper_header_content": [ "Introduction", "Computational Framework", "Cooccurrence and Prevalence Correlation", "Datasets and Representation of Ideas", "Characterizing the Space of Relations", "Empirical Distribution Properties", "Relative Strength of Extreme Pairs", "Exploratory Studies", "International Relations in Terrorism", "Ethnicity Keywords in Immigration", "Relations between Topics in ACL", "Concluding Discussion" ] }
GEM-SciDuet-train-93#paper-1238#slide-9
Joint distributions
Correlated, but many pairs in all four quadrants!
Correlated, but many pairs in all four quadrants!
[]
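For readers who want to reproduce the cooccurrence and prevalence-correlation statistics described in the paper content above, here is a minimal Python sketch. It is not the authors' released implementation; the function names and the data layout (a binary document-by-idea occurrence matrix plus a per-document year array) are illustrative assumptions.

```python
import numpy as np

def pmi(occurrence, i, j):
    """Add-one smoothed empirical PMI between ideas i and j, in the spirit of Eq. 1.

    occurrence: binary numpy array of shape (num_docs, num_ideas), where
    occurrence[d, k] = 1 if idea k is present in document d.
    """
    n = occurrence.shape[0]
    both = 1 + np.sum(occurrence[:, i] * occurrence[:, j])
    count_i = 1 + np.sum(occurrence[:, i])
    count_j = 1 + np.sum(occurrence[:, j])
    return np.log(n * both / (count_i * count_j))

def prevalence_correlation(occurrence, years, i, j):
    """Pearson correlation of the two ideas' yearly document frequencies (cf. Eq. 2)."""
    year_values = np.unique(years)
    freq_i = np.array([occurrence[years == y, i].mean() for y in year_values])
    freq_j = np.array([occurrence[years == y, j].mean() for y in year_values])
    return float(np.corrcoef(freq_i, freq_j)[0, 1])

def strength(occurrence, years, i, j):
    """Relation strength |PMI * correlation| (cf. Eq. 3)."""
    return abs(pmi(occurrence, i, j) * prevalence_correlation(occurrence, years, i, j))
```

The sign of the PMI and the sign of the correlation place a pair of ideas in one of the four quadrants (friendship, tryst, arms-race, head-to-head), and the strength ranks pairs within a quadrant.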
GEM-SciDuet-train-93#paper-1238#slide-10
1238
GEM-SciDuet-train-93#paper-1238#slide-10
The strength of relations
Strength = |PMI| × |correlation|. Extreme pairs are the interesting ones!
Strength = |PMI| × |correlation|. Extreme pairs are the interesting ones!
[]
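The slide above states the strength as |PMI| × |correlation| and points to the extreme pairs as the interesting ones. A small follow-on sketch, under the same assumptions as the sketch after the first record, shows how pairs could be binned into the four relation types and how the collective strength reported in the paper, the average strength of the 25 strongest pairs per type, could be computed. The value 25 comes from the paper; everything else is an illustrative assumption.

```python
import numpy as np

def relation_type(pmi_val, corr_val):
    """Quadrant of a pair in (cooccurrence, prevalence-correlation) space."""
    if pmi_val >= 0:
        return "friendship" if corr_val >= 0 else "tryst"
    return "arms-race" if corr_val >= 0 else "head-to-head"

def collective_strength(pair_stats, top_k=25):
    """Average strength of the top_k strongest pairs within each relation type.

    pair_stats: iterable of (pmi, correlation) tuples, one per idea pair.
    """
    by_type = {}
    for p, r in pair_stats:
        by_type.setdefault(relation_type(p, r), []).append(abs(p * r))
    return {t: float(np.mean(sorted(vals, reverse=True)[:top_k]))
            for t, vals in by_type.items()}
```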
GEM-SciDuet-train-93#paper-1238#slide-13
1238
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228 ], "paper_content_text": [ "Introduction Ideas exist in the mind, but are made manifest in language, where they compete with each other for the scarce resource of human attention.", "Milton (1644) used the \"marketplace of ideas\" metaphor to argue that the truth will win out when ideas freely compete; Dawkins (1976) similarly likened the evolution of ideas to natural selection of genes.", "We propose a framework to quantitatively characterize competition and cooperation between ideas in texts, independent of how they might be represented.", "By \"ideas\", we mean any discrete conceptual units that can be identified as being present or absent in a document.", "In this work, we consider representing ideas using keywords and topics obtained in an unsupervised fashion, but our way of characterizing the relations between ideas could be applied to many other types of textual representations, such as frames (Card et al., 2015) and hashtags.", "What does it mean for two ideas to compete in texts, quantitatively?", "Consider, for example, the issue of immigration.", "There are two strongly competing narratives about the roughly 11 million people 1 who are residing in the United States without permission.", "One is \"illegal aliens\", who \"steal\" jobs and deny opportunities to legal immigrants; the other is \"undocumented immigrants\", who are already part of the fabric of society and deserve a path to citizenship (Merolla et al., 2013) .", "Although prior knowledge suggests that these two narratives compete, it is not immediately obvious what measures might reveal this competition in a corpus of writing about immigration.", "One question is whether or not these two ideas cooccur in the same documents.", "In the example above, these narratives are used by distinct groups of people with different ideologies.", "The fact that they don't cooccur is one clue that they may be in competition with each other.", "However, cooccurrence is insufficient to express the selection process of ideas, i.e., some ideas fade out over time, while others rise in popularity, analogous to the populations of species in nature.", "Of the two narratives on immigration, we may expect one to win out at the expense of another as public opinion shifts.", "Alternatively, we might expect to see these narratives reinforcing each other, as both sides intensify their messaging in response to growing opposition, much like the U.S.S.R. 
and immigration, deportation republican, party Figure 1 : Relations between ideas in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions).", "We use topics from LDA (Blei et al., 2003) to represent ideas.", "Each topic is named with a pair of words that are most strongly associated with the topic in LDA.", "Subplots show examples of relations between topics found in U.S. newspaper articles on immigration from 1980 to 2016, color coded to match the description in text.", "The y-axis represents the proportion of news articles in a year (in our corpus) that contain the corresponding topic.", "All examples are among the top 3 strongest relations in each type except (\"immigrant, undocumented\", \"illegal, alien\"), which corresponds to the two competing narratives.", "We explain the formal definition of strength in §2.", "the U.S. during the cold war.", "To capture these possibilities, we use prevalence correlation over time.", "Building on these insights, we propose a framework that combines cooccurrence within documents and prevalence correlation over time.", "This framework gives rise to four possible types of relation that correspond to the four quadrants in Fig.", "1 .", "We explain each type using examples from news articles in U.S. newspapers on immigration from 1980 to 2016.", "Here, we have used LDA to identify ideas in the form of topics, and we denote each idea with a pair of words most strongly associated with the corresponding topic.", "Friendship (correlated over time, likely to cooccur).", "The \"immigrant, undocumented\" topic tends to cooccur with \"obama, president\" and both topics have been rising during the period of our dataset, likely because the \"undocumented immigrants\" narrative was an important part of Obama's framing of the immigration issue (Haynes et al., 2016) .", "Head-to-head (anti-correlated over time, unlikely to cooccur).", "\"immigrant, undocumented\" and \"illegal, alien\" are in a head-to-head competition: these two topics rarely cooccur, and \"immigrant, undocu-mented\" has been growing in prevalence, while the usage of \"illegal, alien\" in newspapers has been declining.", "This observation agrees with a report from Pew Research Center (Guskin, 2013) .", "Tryst (anti-correlated over time, likely to cooccur).", "The two off-diagonal examples use topics related to law enforcement.", "Overall, \"immigration, deportation\" and \"detention, jail\" often cooccur but \"detention, jail\" has been declining, while \"immigration, deportation\" has been rising.", "This possibly relates to the promises to overhaul the immigration detention system (Kalhan, 2010).", "2 Arms-race (correlated over time, unlikely to cooccur).", "One of the above law enforcement topics (\"immigration, deportation\") and a topic on the Republican party (\"republican, party\") hold an armsrace relation: they are both growing in prevalence over time, but rarely cooccur, perhaps suggesting an underlying common cause.", "Note that our terminology describes the relations between ideas in texts, not necessarily between the entities to which the ideas refer.", "For example, we find that the relation between \"Israel\" and \"Palestine\" is \"friendship\" in news articles on terrorism, based on their prevalence correlation and cooccurrence in that corpus.", "We introduce the formal definition of our framework in §2 and apply it to news articles on five issues and research papers from ACL Anthology and NIPS 
as testbeds.", "We operationalize ideas using topics (Blei et al., 2003) and keywords (Monroe et al., 2008) .", "To explore whether the four relation types exist and how strong these relations are, we first examine the marginal and joint distributions of cooccurrence and prevalence correlation ( §3).", "We find that cooccurrence shows a unimodal normal-shaped distribution but prevalence correlation demonstrates more diverse distributions across corpora.", "As we would expect, there are, in general, more and stronger friendship and head-to-head relations than arms-race and tryst relations.", "Second, we demonstrate the effectiveness of our framework through in-depth case studies ( §4).", "We not only validate existing knowledge about some news issues and research areas, but also identify hypotheses that require further investigation.", "For example, using keywords to represent ideas, a top pair with the tryst relation in news articles on terrorism is \"arab\" and \"islam\"; they are likely to cooccur, but \"islam\" is rising in relative prevalence while \"arab\" is declining.", "This suggests a conjecture that the news media have increasingly linked terrorism to a religious group rather than an ethnic group.", "We also show relations between topics in ACL that center around machine translation.", "Our work is a first step towards understanding relations between ideas from text corpora, a complex and important research question.", "We provide some concluding thoughts in §6.", "Computational Framework The aim of our computational framework is to explore relations between ideas.", "We thus assume that the set of relevant ideas has been identified, and those expressed in each document have been tabulated.", "Our open-source implementation is at https://github.com/Noahs-ARK/ idea_relations/.", "In the following, we introduce our formal definitions and datasets.", "∀x, y ∈ I, PMI(x, y) = logP (x, y) P (x)P (y) = C + log 1+ t k 1{x∈dt k }·1{y∈dt k } (1+ t k 1{x∈dt k })·(1+ t k 1{y∈dt k }) (1) r(x, y) = t P (x|t)−P (x|t) P (y|t)−P (y|t) t P (x|t)−P (x|t) 2 t P (y|t)−P (y|t) 2 (2) Figure 2 : Eq.", "1 is the empirical pointwise mutual information for two ideas, our measure of cooccurrence of ideas; note that we use add-one smoothing in estimating PMI.", "Eq.", "2 is the Pearson correlation between two ideas' prevalence over time.", "Cooccurrence and Prevalence Correlation As discussed in the introduction, we focus on two dimensions to quantify relations between ideas: 1. cooccurrence reveals to what extent two ideas tend to occur in the same contexts; 2. similarity between the relative prevalence of ideas over time reveals how two ideas relate in terms of popularity or coverage.", "Our input is a collection of documents, each represented by a set of ideas and indexed by time.", "We denote a static set of ideas as I and a text corpus that consists of these ideas as C = {D 1 , .", ".", ".", ", D T }, where D t = {d t 1 , .", ".", ".", ", d t N t } gives the collection of documents at timestep t, and each document, d t k , is represented as a subset of ideas in I.", "Here T is the total number of timesteps, and N t is the number of documents at timestep t. 
It follows that the total number of documents N = T t=1 N t .", "In order to formally capture the two dimensions above, we employ two commonly-used statistics.", "First, we use empirical pointwise mutual information (PMI) to capture the cooccurrence of ideas within the same document (Church and Hanks, 1990); see Eq.", "1 in Fig.", "2 .", "Positive PMI indicates that ideas occur together more frequently than would be expected if they were independent, while negative PMI indicates the opposite.", "Second, we compute the correlation between normalized document frequency of ideas to capture the relation between the relative prevalence of ideas across documents over time; see Eq.", "2 in Fig.", "2 .", "Positiver indicates that two ideas have similar prevalence over time, while negativer sug-gests two anti-correlated ideas (i.e., when one goes up, the other goes down).", "The four types of relations in the introduction can now be obtained using PMI andr, which capture cooccurrence and prevalence correlation respectively.", "We further define the strength of the relation between two ideas as the absolute value of the product of their PMI andr scores: ∀x, y ∈ I, strength(x, y) = | PMI(x, y)×r(x, y)|.", "(3) Datasets and Representation of Ideas We use two types of datasets to validate our framework: news articles and research papers.", "We choose these two domains because competition between ideas has received significant interest in history of science (Kuhn, 1996) and research on framing (Chong and Druckman, 2007; Entman, 1993; Gitlin, 1980; Lakoff, 2014) .", "Furthermore, interesting differences may exist in these two domains as news evolves with external events and scientific research progresses through innovations.", "• News articles.", "We follow the strategy in Card et al.", "(2015) to obtain news articles from Lex-isNexis on five issues: abortion, immigration, same-sex marriage, smoking, and terrorism.", "We search for relevant articles using LexisNexis subject terms in U.S. newspapers from 1980 to 2016.", "Each of these corpora contains more than 25,000 articles.", "Please refer to the supplementary material for details.", "• Research papers.", "We consider full texts of papers from two communities: our own ACL community captured by papers from ACL, NAACL, EMNLP, and TACL from 1980 to 2014 (Radev et al., 2009 ; and the NIPS community from 1987 to 2016.", "3 There are 4.8K papers from the ACL community and 6.6K papers from the NIPS community.", "The processed datasets are available at https://chenhaot.com/ pages/idea-relations.html.", "In order to operationalize ideas in a text corpus, we consider two ways to represent ideas.", "• Topics.", "We extract topics from each document by running LDA (Blei et al., 2003) on each corpus C. 
In all datasets, we set the number of topics to 50.", "4 Formally, I is the 50 topics learned from the corpus, and each document is represented as the set of topics that are present with greater than 0.01 probability in the topic distribution for that document.", "• Keywords.", "We identify a list of distinguishing keywords for each corpus by comparing its word frequencies to the background frequencies found in other corpora using the informative Dirichlet prior model in Monroe et al.", "(2008) .", "We set the number of keywords to 100 for all corpora.", "For news articles, the background corpus for each issue is comprised of all articles from the other four issues.", "For research papers, we use NIPS as the background corpus for ACL and vice versa to identify what are the core concepts for each of these research areas.", "Formally, I is the 100 top distinguishing keywords in the corpus and each document is represented as the set of keywords within I that are present in the document.", "Refer to the supplementary material for a list of example keywords in each corpus.", "In both procedures, we lemmatize all words and add common bigram phrases to the vocabulary following Mikolov et al.", "(2013) .", "Note that in our analysis, ideas are only present or absent in a document, and a document can in principle be mapped to any subset of ideas in I.", "In our experiments 90% of documents are marked as containing between 7 and 14 ideas using topics, 8 and 33 ideas using keywords.", "Characterizing the Space of Relations To provide an overview of the four relation types in Fig.", "1 , we first examine the empirical distributions of the two statistics of interest across pairs of ideas.", "In most exploratory studies, however, we are most interested in pairs that exemplify each type of relation, i.e., the most extreme points in each quadrant.", "We thus look at these pairs in each corpus to observe how the four types differ in salience across datasets.", "Empirical Distribution Properties To the best of our knowledge, the distributions of pairwise cooccurrence and prevalence correlation have not been examined in previous literature.", "We thus first investigate the marginal distributions of cooccurrence and prevalence correlation and then our framework is to analyze relations between ideas, so this choice is not essential in this work.", "(Scott, 2015) .", "The plots along the axes show the marginal distribution of the corresponding dimension.", "In each plot, we give the Pearson correlation, and all Pearson correlations' p-values are less than 10 −40 .", "In these plots, we use topics to represent ideas.", "their joint distribution.", "Fig.", "3 shows three examples: two from news articles and one from research papers.", "We will also focus our case studies on these three corpora in §4.", "The corresponding plots for keywords have been relegated to supplementary material due to space limitations.", "Cooccurrence tends to be unimodal but not normal.", "In all of our datasets, pairwise cooccurrence ( PMI) presents a unimodal distribution that somewhat resembles a normal distribution, but it is rarely precisely normal.", "We cannot reject the hypothesis that it is unimodal for any dataset (using topics or keywords) using the dip test (Hartigan and Hartigan, 1985) , though D'Agostino's K 2 test (D'Agostino et al., 1990) rejects normality in almost all cases.", "Prevalence correlation exhibits diverse distributions.", "Pairwise prevalence correlation follows different distributions in news articles 
compared to research papers: they are unimodal in news articles, but not in ACL or NIPS.", "The dip test only rejects the unimodality hypothesis in NIPS.", "None follow normal distributions based on D'Agostino's K^2 test.", "Cooccurrence is positively correlated with prevalence correlation.", "In all of our datasets, cooccurrence is positively correlated with prevalence correlation whether we use topics or keywords to represent ideas, although the Pearson correlation coefficients vary.", "This suggests that there are more friendship and head-to-head relations than tryst and arms-race relations.", "Based on the results of kernel density estimation, we also observe that this correlation is often loose, e.g., in ACL topics, cooccurrence spreads widely at each mode of prevalence correlation.", "Relative Strength of Extreme Pairs We are interested in how our framework can identify intriguing relations between ideas.", "These potentially interesting pairs likely correspond to the extreme points in each quadrant instead of the ones around the origin, where PMI and prevalence correlation are both close to zero.", "Here we compare the relative strength of extreme pairs in each dataset.", "We will discuss how these extreme pairs confirm existing knowledge and suggest new hypotheses via case studies in §4.", "For each relation type, we average the strengths of the 25 pairs with the strongest relations in that type, with strength defined in Eq.", "3.", "This heuristic (henceforth collective strength) allows us to collectively compare the strengths of the most prominent friendship, tryst, arms-race, and head-to-head relations.", "The results are not sensitive to the choice of 25.", "Fig.", "4 shows the collective strength of the four types in all of our datasets.", "The most common ordering is: friendship > head-to-head > arms-race > tryst.", "The fact that friendship and head-to-head relations are strong is consistent with the positive correlation between cooccurrence and prevalence correlation.", "In news, friendship is the strongest relation type, but head-to-head is the strongest in ACL topics and NIPS topics.", "This suggests, unsurprisingly, that there are stronger head-to-head competitions (i.e., one idea takes over another) between ideas in scientific research than in news.", "We also see that topics show greater strength in our scientific article collections, while keywords dominate in news, especially in friendship.", "We conjecture that terms in scientific literature are often overloaded (e.g., a tree could be a parse tree or a decision tree), necessitating some abstraction when representing ideas.", "In contrast, news stories are more self-contained and seek to employ consistent usage.", "Exploratory Studies We present case studies based on strongly related pairs of ideas in the four types of relation.", "Throughout this section, \"rank\" refers to the rank of the relation strength between a pair of ideas in its corresponding relation type.", "International Relations in Terrorism Following a decade of declining violence in the 90s, the events of September 11, 2001 precipitated a dramatic increase in concern about terrorism, and a major shift in how it was framed (Kern et al., 2003).", "As a showcase, we consider a topic which encompasses much of the U.S. 
government's response to terrorism: \"federal, state\".", "(Footnote 5: As in §1, we summarize each topic using a pair of strongly associated words, instead of assigning a name.)", "We observe two topics engaging in an \"arms race\" with this one: \"afghanistan, taliban\" and \"pakistan, india\".", "These correspond to two geopolitical regions closely linked to the U.S. government's concern with terrorism, and both were sites of U.S. military action during the period of our dataset.", "Events abroad and the U.S. government's responses follow the arms-race pattern, each holding increasing attention with the other, likely because they share the same underlying cause.", "Figure 6: Tryst relation between arab and islam using keywords to represent ideas (#2 in tryst): these two words tend to cooccur but are anti-correlated in prevalence over time.", "In particular, islam was rarely used in coverage of terrorism in the 1980s.", "We also observe two head-to-head rivals to the \"federal, state\" topic: \"iran, libya\" and \"israel, palestinian\".", "While these topics correspond to regions that are hotly debated in the U.S., their coverage in news tends not to correlate temporally with the U.S. government's responses to terrorism, at least during the time period of our corpus.", "Discussion of these regions was more prevalent in the 80s and 90s, with declining media coverage since then (Kern et al., 2003).", "The relations between these topics are consistent with structural balance theory (Cartwright and Harary, 1956; Heider, 1946), which suggests that the enemy of an enemy is a friend.", "The \"afghanistan, taliban\" topic has the strongest friendship relation with the \"pakistan, india\" topic, i.e., they are likely to cooccur and are positively correlated in prevalence.", "Similarly, the \"iran, libya\" topic is a close \"friend\" with the \"israel, palestinian\" topic (ranked 8th in friendship).", "Fig.", "5a shows the relations between the \"federal, state\" topic and four international topics.", "Edge colors indicate relation types and the number in an edge label presents the ranking of its strength in the corresponding relation type.", "Fig.", "5b and Fig.", "5c represent concrete examples in Fig.", "5a: \"federal, state\" and \"afghanistan, taliban\" follow similar trends, although \"afghanistan, taliban\" fluctuates over time due to significant events such as the September 11 attacks in 2001 and the death of Bin Laden in 2011; while \"iran, lybia\" is negatively correlated with \"federal, state\".", "In fact, more than 70% of terrorism news in the 80s contained the \"iran, lybia\" topic.", "When using keywords to represent ideas, we observe similar relations between the term homeland security and terms related to the above foreign countries.", "In addition, we highlight an interesting but unexpected tryst relation between arab and islam (Fig.", "6).", "It is not surprising that these two words tend to cooccur in the same news articles, but the usage of islam in the news is increasing while arab is declining.", "The increasing prevalence of islam and decreasing prevalence of arab over this time period can also be seen, for example, using Google's n-gram viewer, but it of course provides no information about cooccurrence.", "This trend has not been previously noted to the best of our knowledge, although an article in the Huffington Post called for news editors to distinguish Muslim from Arab.", "Our observation suggests a conjecture that the news media have increasingly linked terrorism to a 
religious group rather than an ethnic group, perhaps in part due to the tie between the events of 9/11 and Afghanistan, which is not an Arab or Arabic-speaking country.", "We leave it to further investigation to confirm or reject this hypothesis.", "To further demonstrate the effectiveness of our approach, we compare a pair's rank using only cooccurrence or prevalence correlation with its rank in our framework.", "Table 1 shows the results for three pairs above.", "If we had looked at only cooccurrence or prevalence correlation, we would probably have missed these interesting pairs.", "Table 1 (rank by PMI alone / rank by prevalence correlation alone): \"federal, state\" & \"afghanistan, taliban\" (#2 in arms-race): PMI 43, Corr 99; \"federal, state\" & \"iran, lybia\" (#2 in head-to-head): PMI 36, Corr 56; arab & islam (#2 in tryst): PMI 106, Corr 1,494.", "Ethnicity Keywords in Immigration In addition to results on topics in §1, we observe unexpected patterns about ethnicity keywords in immigration news.", "Our observation starts with a top tryst relation between latino and asian.", "Although these words are likely to cooccur, their prevalence trajectories differ, with the discussion of Asian immigrants in the 1990s giving way to a focus on the word latino from 2000 onward.", "Possible theories to explain this observation include that undocumented immigrants are generally perceived as a Latino issue, or that Latino voters are increasingly influential in U.S. elections.", "Furthermore, latino holds head-to-head relations with two subgroups of Latin American immigrants: haitian and cuban.", "In particular, the strength of the relation with haitian is ranked #18 in head-to-head relations.", "Meanwhile, haitian and cuban have a friendship relation, which is again consistent with structural balance theory.", "The decreasing prevalence of haitian and cuban perhaps speaks to the shifting geographical focus of recent immigration to the U.S., and issues of the Latino panethnicity.", "In fact, a majority of Latinos prefer to identify with their national origin relative to the pan-ethnic terms (Taylor et al., 2012).", "However, we should also note that much of this coverage relates to a set of specific refugee crises, temporarily elevating the political importance of these nations in the U.S.", "Nevertheless, the underlying social and political reasons behind these head-to-head relations are worth further investigation.", "Relations between Topics in ACL Finally, we analyze relations between topics in the ACL Anthology.", "It turns out that \"machine translation\" is at a central position among top ranked relations in all the four types (Fig.", "8).", "It is part of the strongest relation in all four types except tryst (ranked #5).", "The full relation graph presents further patterns.", "Friendship demonstrates transitivity: both \"machine translation\" and \"word alignment\" have similar relations with other topics.", "But such transitivity does not hold for tryst: although the prevalence of \"rule, forest methods\" is anti-correlated with both \"machine translation\" and \"sentiment analysis\", \"sentiment analysis\" seldom cooccurs with \"rule, forest methods\" because \"sentiment analysis\" is seldom built on parsing algorithms.", "Similarly, \"rule, forest methods\" and \"discourse (coherence)\" hold an arms-race relation: they do not tend to cooccur and both decline in relative prevalence as \"machine translation\" rises.", "The prevalence of each of these ideas in comparison to machine translation is shown in Fig.", "9, which reveals additional detail.", "Figure 9: Relations between 
topics in ACL Anthology in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions), color coded to match the text.", "The y-axis represents the relative proportion of papers in a year that contain the corresponding topic.", "The top 10 words for the rule, forest methods topic are rule, grammar, derivation, span, algorithm, forest, parsing, figure, set, string.", "Concluding Discussion We proposed a method to characterize relations between ideas in texts through the lens of cooccurrence within documents and prevalence correlation over time.", "For the first time, we observe that the distribution of pairwise cooccurrence is unimodal, while the distribution of pairwise prevalence correlation is not always unimodal, and show that they are positively correlated.", "This combination suggests four types of relations between ideas, and these four types are all found to varying extents in our experiments.", "We illustrate our computational method by exploratory studies on news corpora and scientific research papers.", "We not only confirm existing knowledge but also suggest hypotheses around the usage of arab and islam in terrorism and latino and asian in immigration.", "It is important to note that the relations found using our approach depend on the nature of the representation of ideas and the source of texts.", "For instance, we cannot expect relations found in news articles to reflect shifts in public opinion if news articles do not effectively track public opinion.", "Our method is entirely observational.", "It remains as a further stage of analysis to understand the underlying reasons that lead to these relations between ideas.", "In scientific research, for example, it could simply be the progress of science, i.e., newer ideas overtake older ones deemed less valuable at a given time; on the other hand, history suggests that it is not always the correct ideas that are most expressed, and many other factors may be important.", "Similarly, in news coverage, underlying sociological and political situations have significant impact on which ideas are presented, and how.", "There are many potential directions to improve our method to account for complex relations between ideas.", "For instance, we assume that both ideas and relations are statically grounded in keywords or topics.", "In reality, ideas and relations both evolve over time: a tryst relation might appear as friendship if we focus on a narrower time period.", "Similarly, new ideas show up and even the same idea may change over time and be represented by different words." ] }
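The Topics and Keywords bullets in the paper_content above describe how ideas are operationalized: 50 LDA topics per corpus, with a topic counted as present in a document when its probability exceeds 0.01, and the 100 most distinguishing keywords under the informative Dirichlet prior model of Monroe et al. (2008). The sketch below is one way to reproduce both steps; it is not the authors' code, the scikit-learn and NumPy choices (including the prior_scale parameter) are my own, and the lemmatization and bigram-phrase preprocessing mentioned above are omitted.

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def topic_ideas(texts, n_topics=50, threshold=0.01):
    # Each document -> the set of LDA topics present with probability > threshold.
    vectorizer = CountVectorizer(stop_words="english", min_df=5)
    counts = vectorizer.fit_transform(texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topic = lda.fit_transform(counts)  # one normalized topic distribution per document
    return [{k for k, p in enumerate(row) if p > threshold} for row in doc_topic]

def keyword_ideas(word_counts, background_counts, vocab, n_keywords=100, prior_scale=10.0):
    # Log-odds with an informative Dirichlet prior (Monroe et al., 2008): z-scored
    # difference in log-odds between the target corpus and its background corpus.
    word_counts = np.asarray(word_counts, dtype=float)
    background_counts = np.asarray(background_counts, dtype=float)
    prior = prior_scale * (word_counts + background_counts) / (word_counts + background_counts).sum()
    n, n_bg, a0 = word_counts.sum(), background_counts.sum(), prior.sum()
    delta = (np.log((word_counts + prior) / (n + a0 - word_counts - prior))
             - np.log((background_counts + prior) / (n_bg + a0 - background_counts - prior)))
    z = delta / np.sqrt(1.0 / (word_counts + prior) + 1.0 / (background_counts + prior))
    return [vocab[i] for i in np.argsort(-z)[:n_keywords]]

Each document is then represented as the subset of these topics or keywords it contains, matching the presence/absence representation of ideas described above.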
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "6" ], "paper_header_content": [ "Introduction", "Computational Framework", "Cooccurrence and Prevalence Correlation", "Datasets and Representation of Ideas", "Characterizing the Space of Relations", "Empirical Distribution Properties", "Relative Strength of Extreme Pairs", "Exploratory Studies", "International Relations in Terrorism", "Ethnicity Keywords in Immigration", "Relations between Topics in ACL", "Concluding Discussion" ] }
GEM-SciDuet-train-93#paper-1238#slide-13
Top relations between ideas
The relations between these topics are consistent with structural balance theory: the enemy of an enemy is a friend
The relations between these topics are consistent with structural balance theory: the enemy of an enemy is a friend
[]
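The paper_content in this record maps the sign pattern of (cooccurrence, prevalence correlation) to the four relation types and, per its Section 3.2, scores each type by the mean strength of its 25 strongest pairs ("collective strength"). A small sketch of both steps, with function and variable names of my own choosing:

def relation_type(pmi_value, corr_value):
    # Quadrants of (cooccurrence, prevalence correlation): positive PMI means the two
    # ideas tend to cooccur; positive correlation means their prevalence moves together.
    if pmi_value > 0:
        return "friendship" if corr_value > 0 else "tryst"
    return "arms-race" if corr_value > 0 else "head-to-head"

def collective_strength(scored_pairs, k=25):
    # Mean strength |PMI * r| of the k strongest pairs within one relation type.
    strengths = sorted((abs(p * r) for p, r in scored_pairs), reverse=True)[:k]
    return sum(strengths) / len(strengths)

# Example usage: group all idea pairs by type, then compare the four collective strengths.
# from collections import defaultdict
# by_type = defaultdict(list)
# for pair, (p, r) in scores.items():
#     by_type[relation_type(p, r)].append((p, r))
# {t: collective_strength(v) for t, v in by_type.items()}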
GEM-SciDuet-train-93#paper-1238#slide-14
1238
Friendships, Rivalries, and Trysts: Characterizing Relations between Ideas in Texts
GEM-SciDuet-train-93#paper-1238#slide-14
Effective explorations
Rank among all relations: (federal, state) & (afghanistan, taliban); (federal, state) & (iran, lybia). The interesting pair is ranked much higher according to our framework.
Rank among all relations: (federal, state) & (afghanistan, taliban); (federal, state) & (iran, lybia). The interesting pair is ranked much higher according to our framework.
[]
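The slide above ("Effective explorations") and Table 1 in the paper_content make the same point: a pair such as (federal, state) and (afghanistan, taliban) ranks much higher under the combined strength than under cooccurrence or prevalence correlation alone. A sketch of that comparison; ranking by absolute value is my assumption, since the exact sorting convention behind Table 1 is not spelled out in the text.

def rank_of(pair, scores, key):
    # 1-based rank of `pair` among all idea pairs when sorted by `key` (largest first).
    ordered = sorted(scores, key=lambda q: key(*scores[q]), reverse=True)
    return 1 + ordered.index(pair)

# `scores` maps each pair of ideas to its (PMI, prevalence correlation) values, e.g.:
# rank_of(("arab", "islam"), scores, key=lambda p, r: abs(p))      # cooccurrence only
# rank_of(("arab", "islam"), scores, key=lambda p, r: abs(r))      # prevalence correlation only
# rank_of(("arab", "islam"), scores, key=lambda p, r: abs(p * r))  # combined strength (Eq. 3)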
GEM-SciDuet-train-93#paper-1238#slide-15
1238
Friendships, Rivalries, and Trysts: Characterizing Relations between Ideas in Texts
compared to research papers: they are unimodal in news articles, but not in ACL or NIPS.", "The dip test only rejects the unimodality hypothesis in NIPS.", "None follow normal distributions based on D'Agostino's K 2 test.", "Cooccurrence is positively correlated with prevalence correlation.", "In all of our datasets, cooccurrence is positively correlated with prevalence correlation whether we use topics or keywords to represent ideas, although the Pearson correlation coefficients vary.", "This suggests that there are more friendship and head-to-head relations than tryst and arms-race relations.", "Based on the results of kernel density estimation, we also observe that this correlation is often loose, e.g., in ACL topics, cooccurrence spreads widely at each mode of prevalence correlation.", "776 Relative Strength of Extreme Pairs We are interested in how our framework can identify intriguing relations between ideas.", "These potentially interesting pairs likely correspond to the extreme points in each quadrant instead of the ones around the origin, where PMI and prevalence correlation are both close to zero.", "Here we compare the relative strength of extreme pairs in each dataset.", "We will discuss how these extreme pairs confirm existing knowledge and suggest new hypotheses via case studies in §4.", "For each relation type, we average the strengths of the 25 pairs with the strongest relations in that type, with strength defined in Eq.", "3.", "This heuristic (henceforth collective strength) allows us to collectively compare the strengths of the most prominent friendship, tryst, arms-race, and head-to-head relations.", "The results are not sensitive to the choice of 25.", "Fig.", "4 shows the collective strength of the four types in all of our datasets.", "The most common ordering is: friendship > head-to-head > arms-race > tryst.", "The fact that friendship and head-to-head relations are strong is consistent with the positive correlation between cooccurrence and prevalence correlation.", "In news, friendship is the strongest relation type, but head-to-head is the strongest in ACL topics and NIPS topics.", "This suggests, unsurprisingly, that there are stronger head-to-head competitions (i.e., one idea takes over another) between ideas in scientific research than in news.", "We also see that topics show greater strength in our scientific article collections, while keywords dominate in news, especially in friendship.", "We conjecture that terms in scientific literature are often overloaded (e.g., a tree could be a parse tree or a decision tree), necessitating some abstraction when representing ideas.", "In contrast, news stories are more self-contained and seek to employ consistent usage.", "Exploratory Studies We present case studies based on strongly related pairs of ideas in the four types of relation.", "Throughout this section, \"rank\" refers to the rank of the relation strength between a pair of ideas in its corresponding relation type.", "International Relations in Terrorism Following a decade of declining violence in the 90s, the events of September 11, 2001 precipitated a dramatic increase in concern about terrorism, and a major shift in how it was framed (Kern et al., 2003) .", "As a showcase, we consider a topic which encompasses much of the U.S. 
government's response to terrorism: \"federal, state\".", "5 We observe two topics engaging in an \"arms race\" with this one: \"afghanistan, taliban\" and \"pakistan, india\".", "These correspond to two geopolitical regions closely linked to the U.S. government's concern with terrorism, and both were sites of U.S. military action during the period of our dataset.", "Events abroad and the U.S. government's responses follow the arms-race pattern, each holding increasing 5 As in §1, we summarize each topic using a pair of strongly associated words, instead of assigning a name.", "Figure 6 : Tryst relation between arab and islam using keywords to represent ideas (#2 in tryst): these two words tend to cooccur but are anti-correlated in prevalence over time.", "In particular, islam was rarely used in coverage of terrorism in the 1980s.", "attention with the other, likely because they share the same underlying cause.", "We also observe two head-to-head rivals to the \"federal, state\" topic: \"iran, libya\" and \"israel, palestinian\".", "While these topics correspond to regions that are hotly debated in the U.S., their coverage in news tends not to correlate temporally with the U.S. government's responses to terrorism, at least during the time period of our corpus.", "Discussion of these regions was more prevalent in the 80s and 90s, with declining media coverage since then (Kern et al., 2003) .", "The relations between these topics are consistent with structural balance theory (Cartwright and Harary, 1956; Heider, 1946) , which suggests that the enemy of an enemy is a friend.", "The \"afghanistan, taliban\" topic has the strongest friendship relation with the \"pakistan, india\" topic, i.e., they are likely to cooccur and are positively correlated in prevalence.", "Similarly, the \"iran, libya\" topic is a close \"friend\" with the \"israel, palestinian\" topic (ranked 8th in friendship).", "Fig.", "5a shows the relations between the \"federal, state\" topic and four international topics.", "Edge colors indicate relation types and the number in an edge label presents the ranking of its strength in the corresponding relation type.", "Fig.", "5b and Fig.", "5c represent concrete examples in Fig.", "5a : \"federal, state\" and \"afghanistan, taliban\" follow similar trends, although \"afghanistan, taliban\" fluctuates over time due to significant events such as the September 11 attacks in 2001 and the death of Bin Laden in 2011; while \"iran, lybia\" is negatively correlated with \"federal, state\".", "In fact, more than 70% of terrorism news in the 80s contained the \"iran, lybia\" topic.", "When using keywords to represent ideas, we observe similar relations between the term homeland security and terms related to the above foreign countries.", "In addition, we highlight an interesting but unexpected tryst relation between arab and islam (Fig.", "6) .", "It is not surprising that these two words tend to cooccur in the same news articles, but the usage of islam in the news is increasing while arab is declining.", "The increasing prevalence of islam and decreasing prevalence of arab over this time period can also be seen, for example, using Google's n-gram viewer, but it of course provides no information about cooccurrence.", "This trend has not been previously noted to the best of our knowledge, although an article in the Huffington Post called for news editors to distinguish Muslim from Arab.", "6 Our observation suggests a conjecture that the news media have increasingly linked terrorism to a 
religious group rather than an ethnic group, perhaps in part due to the tie between the events of 9/11 and Afghanistan, which is not an Arab or Arabic-speaking country.", "We leave it to further investigation to confirm or reject this hypothesis.", "To further demonstrate the effectiveness of our approach, we compare a pair's rank using only cooccurrence or prevalence correlation with its rank in our framework.", "Table 1 shows the results for three pairs above.", "If we had looked at only cooccurrence or prevalence correlation, we would probably have missed these interesting pairs.", "PMI Corr \"federal, state\", \"afghanistan, taliban\" (#2 in arms-race) 43 99 \"federal, state\", \"iran, lybia\" (#2 in head-to-head) 36 56 arab, islam (#2 in tryst) 106 1,494 Ethnicity Keywords in Immigration In addition to results on topics in §1, we observe unexpected patterns about ethnicity keywords in immigration news.", "Our observation starts with a top tryst relation between latino and asian.", "Although these words are likely to cooccur, their prevalence trajectories differ, with the discussion of Asian immigrants in the 1990s giving way to a focus on the word latino from 2000 onward.", "Possible theories to explain this observation include that undocumented immigrants are generally perceived as a Latino issue, or that Latino voters are increasingly influential in U.S. elections.", "Furthermore, latino holds head-to-head relations with two subgroups of Latin American immigrants: haitian and cuban.", "In particular, the strength of the relation with haitian is ranked #18 in headto-head relations.", "Meanwhile, haitian and cuban have a friendship relation, which is again consistent with structural balance theory.", "The decreasing prevalence of haitian and cuban perhaps speaks to the shifting geographical focus of recent immigration to the U.S., and issues of the Latino panethnicity.", "In fact, a majority of Latinos prefer to identify with their national origin relative to the pan-ethnic terms (Taylor et al., 2012) .", "However, we should also note that much of this coverage relates to a set of specific refugee crises, temporarily elevating the political importance of these nations in the U.S.", "Nevertheless, the underlying social and political reasons behind these head-to-head relations are worth further investigation.", "Relations between Topics in ACL Finally, we analyze relations between topics in the ACL Anthology.", "It turns out that \"machine translation\" is at a central position among top ranked relations in all the four types (Fig.", "8) .", "7 It is part of the strongest relation in all four types except tryst (ranked #5).", "The full relation graph presents further patterns.", "Friendship demonstrates transitivity: both \"machine translation\" and \"word alignment\" have similar relations with other topics.", "But such transitivity does not hold for tryst: although the prevalence of \"rule, forest methods\" is anti-correlated with both \"machine translation\" and \"sentiment analysis\", \"sentiment analysis\" seldom cooccurs with \"rule, for-est methods\" because \"sentiment analysis\" is seldom built on parsing algorithms.", "Similarly, \"rule, forest methods\" and \"discourse (coherence)\" hold an armsrace relation: they do not tend to cooccur and both decline in relative prevalence as \"machine translation\" rises.", "The prevalence of each of these ideas in comparison to machine translation is shown in in Fig.", "9 , which reveals additional detail.", "Figure 9 : Relations between 
topics in ACL Anthology in the space of cooccurrence and prevalence correlation (prevalence correlation is shown explicitly and cooccurrence is encoded in row captions), color coded to match the text.", "The y-axis represents the relative proportion of papers in a year that contain the corresponding topic.", "The top 10 words for the rule, forest methods topic are rule, grammar, derivation, span, algorithm, forest, parsing, figure, set, string.", "Concluding Discussion We proposed a method to characterize relations between ideas in texts through the lens of cooccurrence within documents and prevalence correlation over time.", "For the first time, we observe that the distribution of pairwise cooccurrence is unimodal, while the distribution of pairwise prevalence correlation is not always unimodal, and show that they are positively correlated.", "This combination suggests four types of relations between ideas, and these four types are all found to varying extents in our experiments.", "We illustrate our computational method by exploratory studies on news corpora and scientific research papers.", "We not only confirm existing knowledge but also suggest hypotheses around the usage of arab and islam in terrorism and latino and asian in immigration.", "It is important to note that the relations found using our approach depend on the nature of the representation of ideas and the source of texts.", "For instance, we cannot expect relations found in news articles to reflect shifts in public opinion if news articles do not effectively track public opinion.", "Our method is entirely observational.", "It remains as a further stage of analysis to understand the underlying reasons that lead to these relations be-tween ideas.", "In scientific research, for example, it could simply be the progress of science, i.e., newer ideas overtake older ones deemed less valuable at a given time; on the other hand, history suggests that it is not always the correct ideas that are most expressed, and many other factors may be important.", "Similarly, in news coverage, underlying sociological and political situations have significant impact on which ideas are presented, and how.", "There are many potential directions to improve our method to account for complex relations between ideas.", "For instance, we assume that both ideas and relations are statically grounded in keywords or topics.", "In reality, ideas and relations both evolve over time: a tryst relation might appear as friendship if we focus on a narrower time period.", "Similarly, new ideas show up and even the same idea may change over time and be represented by different words." ] }
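The statistics above are straightforward to compute once each document has been reduced to a timestep and a set of ideas. The following minimal Python sketch illustrates Eqs. 1-3 and the four-quadrant typology of Figure 1; it is not the released Noahs-ARK/idea_relations implementation, the input format (a list of (timestep, set-of-idea-ids) pairs) is assumed for the example, and the constant C in Eq. 1 is taken to be log N.

    # Sketch of Eqs. 1-3 and the four relation types (friendship, head-to-head,
    # tryst, arms-race). Assumed input: docs = [(timestep, set_of_idea_ids), ...].
    import math
    from itertools import combinations

    def idea_relations(docs, ideas):
        timesteps = sorted({t for t, _ in docs})
        per_t = {t: [d for t2, d in docs if t2 == t] for t in timesteps}
        n_docs = len(docs)
        out = {}
        for x, y in combinations(ideas, 2):
            # Eq. 1: add-one smoothed PMI over within-document cooccurrence (C = log N)
            cx = sum(1 for _, d in docs if x in d)
            cy = sum(1 for _, d in docs if y in d)
            cxy = sum(1 for _, d in docs if x in d and y in d)
            pmi = math.log(n_docs * (1 + cxy) / ((1 + cx) * (1 + cy)))
            # Eq. 2: Pearson correlation of normalized document frequency over time
            px = [sum(1 for d in per_t[t] if x in d) / len(per_t[t]) for t in timesteps]
            py = [sum(1 for d in per_t[t] if y in d) / len(per_t[t]) for t in timesteps]
            mx, my = sum(px) / len(px), sum(py) / len(py)
            num = sum((a - mx) * (b - my) for a, b in zip(px, py))
            den = (math.sqrt(sum((a - mx) ** 2 for a in px)) *
                   math.sqrt(sum((b - my) ** 2 for b in py)))
            r = num / den if den > 0 else 0.0
            # Eq. 3 and the quadrant of Figure 1
            kind = {(True, True): "friendship", (False, False): "head-to-head",
                    (True, False): "tryst", (False, True): "arms-race"}[(pmi > 0, r > 0)]
            out[(x, y)] = {"pmi": pmi, "r": r, "strength": abs(pmi * r), "type": kind}
        return out

Ranking pairs by the "strength" value within each relation type and averaging the top 25 per type reproduces the collective strength heuristic used in §3.2.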
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "6" ], "paper_header_content": [ "Introduction", "Computational Framework", "Cooccurrence and Prevalence Correlation", "Datasets and Representation of Ideas", "Characterizing the Space of Relations", "Empirical Distribution Properties", "Relative Strength of Extreme Pairs", "Exploratory Studies", "International Relations in Terrorism", "Ethnicity Keywords in Immigration", "Relations between Topics in ACL", "Concluding Discussion" ] }
GEM-SciDuet-train-93#paper-1238#slide-15
Acl teaser
machine translation rule,forest methods machine translation word alignment machine translation discourse (coherence) machine translation sentiment analysis
machine translation rule,forest methods machine translation word alignment machine translation discourse (coherence) machine translation sentiment analysis
[]
GEM-SciDuet-train-94#paper-1239#slide-0
1239
Beyond Binary Labels: Political Ideology Prediction of Twitter Users
Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US. This study examines users' political ideology using a seven-point scale which enables us to identify politically moderate and neutral users - groups which are of particular interest to political scientists and pollsters. Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users. Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords. Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175 ], "paper_content_text": [ "Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US.", "This study examines users' political ideology using a sevenpoint scale which enables us to identify politically moderate and neutral usersgroups which are of particular interest to political scientists and pollsters.", "Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users.", "Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords.", "Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.", "Introduction Social media is used by people to share their opinions and views.", "Unsurprisingly, an important part of the population shares opinions and news related to politics or causes they support, thus offering strong cues about their political preferences and ideologies.", "In addition, political membership is also predictable purely from one's interests or demographics -it is much more likely for a religious person to be conservative or for a younger person to lean liberal (Ellis and Stimson, 2012) .", "* Work carried out during a research visit at the University of Pennsylvania User trait prediction from text is based on the assumption that language use reflects a user's demographics, psychological states or preferences.", "Applications include prediction of age (Rao et al., 2010; Flekova et al., 2016b) , gender (Burger et al., 2011; Sap et al., 2014) , personality (Schwartz et al., 2013; , socioeconomic status (Preoţiuc-Pietro et al., 2015a,b; Liu et al., 2016c) , popularity (Lampos et al., 2014) or location (Cheng et al., 2010) .", "Research on predicting political orientation has focused on methodological improvements (Pennacchiotti and Popescu, 2011) and used data sets with publicly stated dichotomous political orientation labels due to their easy accessibility (Sylwester and Purver, 2015) .", "However, these data sets are not representative samples of the entire population (Cohen and Ruths, 2013) and do not accurately reflect the variety of political attitudes and engagement (Kam et al., 2007) .", "For example, we expect users who state their political affiliation in their profile description, tweet with partisan hashtags or appear in public party lists to use social media as a means of popularizing and supporting their political 
beliefs (Bar-berASa, 2015) .", "Many users may choose not to publicly post about their political preference for various social goals or perhaps this preference may not be strong or representative enough to be disclosed online.", "Dichotomous political preference also ignores users who do not have a political ideology.", "All of these types of users are very important for researchers aiming to understand group preferences, traits or moral values (Lewis and Reiley, 2014; Hersh, 2015) .", "The most common political ideology spectrum in the US is the conservative -liberal (Ellis and Stimson, 2012) .", "We collect a novel data set of Twitter users mapped to this seven-point spectrum which allows us to: 1.", "Uncover the differences in language use between ideological groups; 2.", "Develop a user-level political ideology prediction algorithm that classifies all levels of engagement and leverages the structure in the political ideology spectrum.", "First, using a broad range of language features including unigrams, word clusters and emotions, we study the linguistic differences between the two ideologically extreme groups, the two ideologically moderate groups and between both extremes and moderates in order to provide insight into the content they post on Twitter.", "In addition, we examine the extent to which the ideological groups in our data set post about politics and compare it to a data set obtained similarly to previous work.", "In prediction experiments, we show how accurately we can distinguish between opposing ideological groups in various scenarios and that previous binary political orientation prediction has been oversimplified.", "Then, we measure the extent to which we can predict the two dimensions of political leaning and engagement.", "Finally, we build an ideology classifier in a multi-task learning setup that leverages the relationships between groups.", "1 Related Work Automatically inferring user traits from their online footprints is a prolific topic of research, enabled by the increasing availability of user generated data and advances in machine learning.", "Beyond its research oriented goals, user profiling has important industry applications in online marketing, personalization or large-scale audience profiling.", "To this end, researchers have used a wide range of types of online footprints, including video (Subramanian et al., 2013) , audio (Alam and Riccardi, 2014 ), text (Preoţiuc-Pietro et al., 2015a) , profile images (Liu et al., 2016a) , social data (Van Der Heide et al., 2012; Hall et al., 2014) , social networks (Perozzi and Skiena, 2015; Rout et al., 2013) , payment data (Wang et al., 2016) and endorsements .", "Political orientation prediction has been studied in two related, albeit crucially different scenarios, as also identified in (Zafar et al., 2016) .", "First, researchers aimed to identify and quantify orientation of words (Monroe et al., 2008) , hashtags (Weber et al., 2013) or documents (Iyyer et al., 2014) , or to detect bias (Yano et al., 2010) or impartiality (Zafar et al., 2016) at a document level.", "Our study belongs to the second category, where political orientation is inferred at a user-level.", "All previous studies study labeling US conservatives vs. 
liberals using either text (Rao et al., 2010) , social network connections (Zamal et al., 2012) , platform-specific features (Conover et al., 2011) or a combination of these (Pennacchiotti and Popescu, 2011; Volkova et al., 2014) , with very high reported accuracies of up to 94.9% (Conover et al., 2011) .", "However, all previous work on predicting userlevel political preferences are limited to a binary prediction between liberal/democrat and conservative/republican, disregarding any nuances in political ideology.", "In addition, as the focus of the studies is more on the methodological or interpretation aspects of the problem, another downside is that the user labels were obtained in simple, albeit biased ways.", "These include users who explicitly state their political orientation on user lists of party supporters (Zamal et al., 2012; Pennacchiotti and Popescu, 2011) , supporting partisan causes (Rao et al., 2010) , by following political figures (Volkova et al., 2014) or party accounts (Sylwester and Purver, 2015) or that retweet partisan hashtags (Conover et al., 2011) .", "As also identified in (Cohen and Ruths, 2013) and further confirmed later in this study, these data sets are biased: most people do not clearly state their political preference online -fewer than 5% according to Priante et al.", "(2016) -and those that state their preference are very likely to be political activists.", "Cohen and Ruths (2013) demonstrated that predictive accuracy of classifiers is significantly lower when confronted with users that do not explicitly mention their political orientation.", "Despite this, their study is limited because in their hardest classification task, they use crowdsourced political orientation labels, which may not correspond to reality and suffer from biases (Flekova et al., 2016a; .", "Further, they still only look at predicting binary political orientation.", "To date, no other research on this topic has taken into account these findings.", "Data Set The main data set used in this study consists of 3,938 users recruited through the Qualtrics platform (D 1 ).", "Each participant was compensated with 3 USD for 15 minutes of their time.", "All participants first answered the same demographic questions (including political ideology), then were directed to one of four sets of psychological questionnaires unrelated to the political ideology question.", "They were asked to self-report their political ideology on a seven point scale: Very conservative (1), Conservative (2), Moderately conservative (3), Moderate (4), Moderately liberal (5), Liberal (6), Very liberal (7).", "In addition, participants had the option of choosing Apathetic and Other, which have ambiguous fits on the conservative -liberal spectrum and were removed from our analysis (399 users).", "We also asked participants to self-report their gender (2322 female, 1205 male, 12 other) and age.", "Participants were all from the US in order to limit the impact of cultural and political factors.", "The political ideology distribution in our sample is presented in Figure 1 .", "We asked users their Twitter handle and downloaded their most recent 3,200 tweets, leading to a total of 4,833,133 tweets.", "Before adding users to our 3,938 user data set, we performed the following checks to ensure that the Twitter handle was the user's own: 1) after compensation, users were if they were truthful in reporting their handle and if not, we removed their data from analysis; 2) we manually examined all handles marked as verified by 
Twitter or that had over 2000 followers and eliminated them if they were celebrities or corporate/news accounts, as these were unlikely the users who participated in the survey.", "This study received approval from the Institutional Review Board (IRB) of the University of Pennsylvania.", "In addition, to facilitate comparison to previous work, we also use a data set of 13,651 users with overt political orientation (D 2 ).", "We selected popular political figures unambiguously associated with US liberal politics (@SenSanders, @JoeBiden, @CoryBooker, @JohnKerry) or US conservative politics (@marcorubio, @tedcruz, @RandPaul, @RealBenCarson).", "Liberals in our set (N l = 7417) had to follow on Twitter all of the liberal political figures and none of the conservative figures.", "Likewise, conservative users (N c = 6234) had to follow all of the conservative figures and no liberal figures.", "We downloaded up to 3,200 of each user's most recent tweets, leading to a total of 25,493,407 tweets.", "All tweets were downloaded around 10 August 2016.", "Features In our analysis, we use a broad range of linguistic features described below.", "Unigrams We use the bag-of-words representation to reduce each user's posting history to a normalised frequency distribution over the vocabulary consisting of all words used by at least 10% of the users (6,060 words).", "LIWC Traditional psychological studies use a dictionary-based approach to representing text.", "The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) , and automatically counts word frequencies for 64 different categories manually constructed based on psychological theory.", "These include different parts-of-speech, topical categories and emotions.", "Each user is thereby represented as a frequency distribution over these categories.", "Word2Vec Topics An alternative to LIWC is to use automatically generated word clusters i.e., groups of words that are semantically and/or syntactically similar.", "The clusters help reducing the feature space and provides additional interpretability.", "To create these groups of words, we use an automatic method that leverages word co-occurrence patterns in large corpora by making use of the distributional hypothesis: similar words tend to cooccur in similar contexts (Harris, 1954) .", "Based on co-occurrence statistics, each word is represented as a low dimensional vector of numbers with words closer in this space being more similar (Deerwester et al., 1990) .", "We use the method from (Preoţiuc-Pietro et al., 2015a) to compute topics using word2vec similarity (Mikolov et al., 2013a,b) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007) of different sizes (from 30 to 2000).", "We have tried other alternatives to building clusters: using other word similarities to generate clusters -such as NPMI (Lampos et al., 2014) or GloVe as proposed in (Preoţiuc-Pietro et al., 2015a) -or using standard topic modelling approached to create soft clusters of words e.g., Latent Dirichlet Allocation (Blei et al., 2003) .", "For brevity, we present experiments with the best performing feature set containing 500 Word2Vec clusters.", "We aggregate all the words posted in a users' tweets and represent each user as a distribution of the fraction of words belonging to each cluster.", "Sentiment & Emotions We hypothesise that different political ideologies differ in the type and amount of emotions the users express through their posts.", "The most studied model of discrete 
emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004) which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise.", "We automatically quantify these emotions from our Twitter data set using a publicly available crowd-sourcing derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment Turney, 2010, 2013) .", "Using these lexicons, we assign a predicted emotion to each message and then average across all users' posts to obtain user level emotion expression scores.", "Political Terms In order to select unigrams pertaining to politics, we assigned the most frequent 12,000 unigrams in our data set to three categories: • Political words: mentions of political terms (234); • Political NEs: mentions of politician proper names out of the political terms (39); • Media NEs: mentions of political media sources and pundits out of the political terms (20).", "This coding was initially performed by a research assistant studying political science with good knowledge of US politics and were further filtered and checked by one of the authors.", "Analysis First, we explore the relationships between language use and political ideological groups within each feature set and pairs of opposing user groups.", "To illustrate differences between ideological groups we compare the two political extremes (Very Conservative -Very Liberal) and the political moderates (Moderate Conservative -Moderate Liberal).", "We further compare outright moderates with a group combining the two political extremes to study if we can uncover differences in political engagement and extremity, regardless of the conservative-liberal leaning.", "We use univariate partial linear correlations with age and gender as co-variates to factor out the influence of basic demographics.", "For example, in D 1 , users who reported themselves as very conservative are older and more likely males (µ age = 35.1, pct male = 44%) than the data average (µ age = 31.2, pct male = 35%).", "Additionally, prior to combining the two ideologically extreme groups, we sub-sampled the larger class (Very Liberal) to match the smaller class (Very Conservative) in age and gender.", "In the later prediction experiments, we do not perform matching, as this represents useful signal for classification (Ellis and Stimson, 2012) .", "Results with unigrams are presented in Figure 2 and with the other features in Table 1 .", "These are selected using standard statistical significance tests.", "Very Conservatives vs.", "Very Liberals The comparison between the extreme categories reveals the largest number of significant differences.", "The unigrams and Word2Vec clusters specific to conservatives are dominated by religion specific terms ('praying', 'god', W2V-485, W2V-018, W2V-099, L-RELIG), confirming a well-documented relationship (Gelman, 2009) and words describing family relationships ('uncle', 'son', L-FAMILY), another conservative value (Lakoff, 1997) .", "The emphasis on religious terms among conservatives is consistent with the claim that many Americans associate 'conservative' with 'religious' (Ellis and Stimson, 2012) .", "Extreme liberals show a tendency to use more adjectives (W2V-075, W2V-110), adverbs (L-ADVERB), conjunctions (L-CONJ) and comparisons (L-COMPARE) which indicate more nuanced and complex posts.", "Extreme conservatives post tweets higher in all positive emotions than liberals (L-POSEMO, Emot-Joy, Emot-Positive), 
confirming a previously hypothesised relationship (Napier and Jost, 2008) .", "However, extreme liberals are not associated with posting negative emotions either, only using words that reflect more anxiety (L-ANX), which is related to neuroticism in which the liberals are higher (Gerber et al., 2010) .", "Political term analysis reveals the partisan terms Figure 2 : Unigrams with the highest 80 Pearson correlations shown as word clouds in three vertical panels with a binary variable representing the two ideological groups compared.", "The size of the unigram is scaled by its correlation with the ideological group in bold.", "The color indexes relative frequency, from light blue (rarely used) to dark blue (frequently used).", "All correlations are significant at p < .05 and controlled for age and gender.", "', 'racism', 'feminism', 'transgender') .", "This perhaps reflects the desire for conservatives on Twitter to identify like-minded individuals, as extreme conservatives are a minority on the platform.", "Liberals, by contrast, use the platform to discuss and popularize their causes.", "Moderate Conservatives vs.", "Moderate Liberals Comparing the two sides of moderate users reveals a slightly more nuanced view of the two ideologies.", "While moderate conservatives still make heavy use of religious terms and express positive emotions (Emot-Joy, L-DRIVES), they also use affiliative language (L-AFFILIATION) and plural pronouns (L-WE).", "Moderate liberals are identified by very different features compared to their more extreme counterparts.", "Most striking is the use of swear and sex words (L-SEXUAL, L-ANGER, W2V-316), also highlighted by Sylwester and Purver (2015) .", "Two word clusters relating to British culture (W2V-458) and art (W2V-373) reflect that liberals are more inclined towards arts (Dollinger, 2007) .", "Statistically significant political terms are very few compared to the previous comparison, probably due to their lower overall usage, which we further investigate later.", "Moderates vs. 
Extremists Our final comparison looks at outright moderates compared to the two extreme groups combined, as we hypothesise the existence of a difference in overall political engagement.", "Moderates are not characterized by many features besides a topic of casual words (W2V-098), indicating the heterogeneity of this group of users.", "However, regardless of their orientation, the ideological extremists stand out from moderates.", "They use words and word clusters related to political actors (W2V-309), issues (W2V-237) and laws (W2V-296, W2V-288).", "LIWC analysis uncovers differences in article use (L-ARTICLE) or power words (L-POWER) specific of political tweets.", "The overall sentiment of these users is negative (Emot-Fear, Emot-Disgust, Emot-Sadness, L-DEATH) compared to moderates.", "This reveals -combined with the finding from the first comparison -that while extreme conservatives are overall more positive than liberals, both groups share negative expression.", "Political terms are almost all significantly correlated with the extreme ideological groups, Con.", "(1) Con.", "(2) M.Con.", "(3) Mod.", "(4) confirming the existence of a difference in political engagement which we study in detail next.", "Figure 3 presents the use of the three types of political terms across the 7 ideological groups in D 1 and the two political groups from D 2 .", "We notice the following: Political Terms • D 2 has a huge skew towards political words, with an average of more than three times more political terms across all three categories than our extreme classes from D 1 ; • Within the groups in D 1 , we observe an almost perfectly symmetrical U-shape across all three types of political terms, confirming our hypothesis about political engagement; • The difference between 1-2/6-7 is larger than 2-3/5-6.", "The extreme liberals and conservatives are disproportionately political, and have the potential to give Twitter's political discussions an unrepresentative, extremist hue (Fiorina, 1999) .", "It is also possible, however, that characterizing one as an extreme liberal or conservative indicates as much about her level of political engagement as it does about her placement on a left-right scale (Converse, 1964; Broockman, 2016) .", "Prediction In this section we build predictive models of political ideology and compare them to data sets obtained using previous work.", "Cross-Group Prediction First, we experiment with classifying between conservatives and liberals across various levels of political engagement in D 1 and between the two polarized groups in D 2 .", "We use logistic regression classification to compare three setups in Table 2 with results measured with ROC AUC as the classes are slightly inbalanced: • 10-fold cross-validation where training is performed on the same task as the testing (principal diagonal); • A train-test setup where training is performed on one task (presented in rows) and testing is performed on another (presented in columns); • A domain adaptation setup (results in brackets) where on each of the 10 folds, the 9 training folds (presented in rows) are supplemented with all the data from a different task (presented in columns) using the EasyAdapt algorithm (Daumé III, 2007) as a proof on concept on the effects of using additional distantly supervised data.", "Data pooling lead to worse results than EasyAdapt.", "Each of the three tasks from D 1 have a similar number of training samples, hence we do not expect that data set size has any effects in comparing the results across 
tasks.", "The results with both sets of features show that: • Prediction performance is much higher for D 2 than for D 1 , with the more extreme groups in D 1 being easier to predict than the moderate groups.", "This confirms that the very high accuracies reported by previous research are an artifact of user label collection and that on regular users, the expected accuracy is much lower (Cohen and Ruths, 2013) .", "We further show that, as the level of political engagement decreases, the classification problem becomes even harder; • The model trained on D 2 and Word2Vec word clusters performs significantly worse on D 1 tasks even if the training data is over 10 times larger.", "When using political words, the D 2 trained classifier performs relatively well on all tasks from D 1 ; • Overall, using political words as features performs better than Word2Vec clusters in the binary classification tasks; • Domain adaptation helps in the majority of cases, leading to improvements of up to .03 in AUC (predicting 2v6 supplemented with 3v5 data).", "Political Leaning and Engagement Prediction Political leaning (Conservative -Liberal, excluding the Moderate group) can be considered an ordinal variable and the prediction problem framed as one of regression.", "In addition to the political leaning prediction, based on analysis and previous prediction results, we hypothesize the existence of a separate dimension of political engagement regardless of the partisan side.", "Thus, we merge users from classes 3-5, 2-6, 1-7 and create a variable with four values, where the lowest value is represented by moderate users (4) and the highest value is represented by either very conservative (1) or very liberal (7) users.", "We use a linear regression algorithm with an Elastic Net regularizer (Zou and Hastie, 2005) as implemented in ScikitLearn (Pedregosa et al., 2011) .", "To evaluate our results, we split our data into 10 stratified folds and performed crossvalidation on one held-out fold at a time.", "For all our methods we tune the parameters of our models on a separate validation fold.", "The overall performance is assessed using Pearson correlation between the set of predicted values and the userreported score.", "Results are presented in Table 3 .", "735 The same patterns hold when evaluating the results with Root Mean Squared Error (RMSE).", "Table 3 : Pearson correlations between the predictions and self-reported ideologies using linear regression with each feature category and a linear combination of their predictions in a 10-fold cross-validation setup.", "Political leaning is represented on the 1-7 scale removing the moderates (4).", "Political engagement is a scale ranging from 4 through 3-5 and 2-6 to 1-7.", "The results show that both dimensions can be predicted well above chance, with political leaning being easier to predict than engagement.", "Word2Vec clusters obtain the highest predictive accuracy for political leaning, even though they did not perform as well in the previous classification tasks.", "For political engagement, political terms and Word2Vec clusters obtain similar predictive accuracy.", "This result is expected based on the results from Figure 3 , which showed how political term usage varies across groups, and how it is especially dependent on political engagement.", "While political terms are very effective at distinguishing between two opposing political groups, they can not discriminate as well between levels of engagement within the same ideological orientation.", "Combining all 
classifiers' predictions in a linear ensemble obtains best results when compared to each individual category.", "Encoding Class Structure In our previous experiments, we uncovered that certain relationships exist between the seven groups.", "For example, extreme conservatives and liberals both demonstrate strong political engagement.", "Therefore, this class structure can be exploited to improve classification performance.", "To this end, we deploy the sparse graph regularized approach (Argyriou et al., 2007; Zhou et al., 2011) to encode the structure of the seven classes as a graph regularizer in a logistic regression framework.", "In particular, we employed a multi-task learning paradigm, where each task is a one-vs-all classification.", "Multi-task learning (MTL) is a learning paradigm that jointly learns multiple related tasks and can achieve better generalization performance than learning each task individually, especially when presented with insufficient training samples (Liu et al., 2015, 2016b).", "The group structure is encoded into a matrix R which codes the groups which are considered similar.", "The objective of the sparse graph regularized multi-task learning problem is: \min_{W, c} \sum_{t=1}^{\tau} \sum_{i=1}^{N} \log(1 + \exp(-Y_{t,i}(W_{:,t}^{\top} X_{t,i} + c_t))) + \gamma \|WR\|_F^2 + \lambda \|W\|_1, where \tau is the number of tasks, N the number of samples, X the feature matrix, Y the outcome matrix, W_{:,t} and c_t the model for task t, and R the structure matrix.", "We define three R matrices: (1) codes that groups with similar political engagement are similar (i.e. 1-7, 2-6, 3-5); (2) codes that groups from each ideological side are similar (i.e. 1-2, 1-3, 2-3, 5-6, 5-7, 6-7); (3) learnt from the data.", "Results are presented in Table 4.", "Regular logistic regression performs slightly better than the majority class baseline, which demonstrates that the 7-class classification is a very hard problem, although most misclassifications are within one ideology point.", "The graph regularization (GR) improves the classification performance over logistic regression (LR) in all cases, with the political leaning based matrix (GR-Leaning) obtaining 2% higher accuracy than the political engagement one (GR-Engagement) and the learnt matrix (GR-Learnt) obtaining the best results.", "Conclusions This study analyzed user-level political ideology through Twitter posts.", "In contrast to previous work, we made use of a novel data set where fine-grained user political ideology labels are obtained through surveys as opposed to binary self-reports.", "We showed that users in our data set are far less likely to post about politics and that real-world fine-grained political ideology prediction is harder and more nuanced than previously reported.", "We analyzed language differences between the ideological groups and uncovered a dimension of political engagement separate from political leaning.", "Our work has implications for pollsters or marketers, who are most interested in identifying and persuading moderate users.", "With respect to political conclusions, researchers commonly conceptualize ideology as a single, left-right dimension similar to what we observe in the U.S.
Congress (Ansolabehere et al., 2008; Bafumi and Herron, 2010) .", "Our results suggest a different direction: self-reported political extremity is more an indication of political engagement than of ideological self-placement (Abramowitz, 2010) .", "In fact, only self-reported extremists appear to devote much of their Twitter activity to politics at all.", "While our study focused solely on text posted by the user, follow-up work can use other modalities such as images or social network analysis to improve prediction performance.", "In addition, our work on user-level modeling can be integrated with work on message-level political bias to study how this is revealed across users with various levels of engagement.", "Another direction of future study will look at political ideology prediction in other countries and cultures, where ideology has different or multiple dimensions." ] }
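The leaning and engagement results in Table 3 come from Elastic Net linear regression evaluated with 10-fold cross-validation and Pearson correlation, as described in §6.2. A minimal sketch of that evaluation loop with scikit-learn follows; the feature matrix and the fixed hyperparameter values are placeholders (the paper tunes parameters on a separate validation fold).

    # Sketch of the Section 6.2 setup: Elastic Net regression, 10 stratified
    # folds, Pearson r between predictions and self-reported scores.
    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.linear_model import ElasticNet
    from sklearn.model_selection import StratifiedKFold

    def cv_pearson(X, y, alpha=1.0, l1_ratio=0.5, n_folds=10, seed=0):
        preds = np.zeros(len(y), dtype=float)
        folds = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
        for train_idx, test_idx in folds.split(X, y):   # stratify on the ordinal label
            model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio)
            model.fit(X[train_idx], y[train_idx])
            preds[test_idx] = model.predict(X[test_idx])
        return pearsonr(preds, y)[0]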
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Data Set", "Features", "Analysis", "Very Conservatives vs. Very Liberals", "Moderate Conservatives vs. Moderate Liberals", "Moderates vs. Extremists", "Political Terms", "Prediction", "Cross-Group Prediction", "Political Leaning and Engagement Prediction", "Encoding Class Structure", "Conclusions" ] }
GEM-SciDuet-train-94#paper-1239#slide-0
Motivation
User attribute prediction from text is successful: - Gender (Burger et al. 2011 EMNLP) - Location (Eisenstein et al. 2010 EMNLP) - Personality (Schwartz et al. 2013 PLoS One) - Impact (Lampos et al. 2014 EACL) - Political Orientation (Volkova et al. 2014 ACL) - Mental Illness (Coppersmith et al. 2014 ACL) - Occupation (Preotiuc-Pietro et al. 2015 ACL) - Income (Preotiuc-Pietro et al. 2015 PLoS One) ... and useful in many applications.
User attribute prediction from text is successful: - Gender (Burger et al. 2011 EMNLP) - Location (Eisenstein et al. 2010 EMNLP) - Personality (Schwartz et al. 2013 PLoS One) - Impact (Lampos et al. 2014 EACL) - Political Orientation (Volkova et al. 2014 ACL) - Mental Illness (Coppersmith et al. 2014 ACL) - Occupation (Preotiuc-Pietro et al. 2015 ACL) - Income (Preotiuc-Pietro et al. 2015 PLoS One) ... and useful in many applications.
[]
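The domain adaptation numbers reported in brackets in Table 2 use the EasyAdapt feature augmentation of Daumé III (2007), in which each feature vector is copied into a shared block plus a domain-specific block before a single classifier is trained on the pooled data. The sketch below shows that augmentation for the two-domain case (survey-labeled users plus the distantly supervised D2 users); logistic regression matches the classifier named in §6.1, while the remaining details are assumptions for illustration.

    # Sketch of EasyAdapt (Daume III, 2007) feature augmentation (Section 6.1).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def easy_adapt(X, domain, n_domains):
        # X: (n, d) features; domain: (n,) integer domain id per row
        n, d = X.shape
        out = np.zeros((n, d * (1 + n_domains)))
        out[:, :d] = X                                   # shared copy
        for i, dom in enumerate(domain):
            out[i, d * (1 + dom): d * (2 + dom)] = X[i]  # domain-specific copy
        return out

    def fit_with_extra_domain(X_target, y_target, X_extra, y_extra):
        X = np.vstack([X_target, X_extra])
        y = np.concatenate([y_target, y_extra])
        dom = np.concatenate([np.zeros(len(y_target), dtype=int),
                              np.ones(len(y_extra), dtype=int)])
        # at test time, held-out target users are augmented with domain id 0
        clf = LogisticRegression(max_iter=1000)
        return clf.fit(easy_adapt(X, dom, n_domains=2), y)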
GEM-SciDuet-train-94#paper-1239#slide-1
1239
Beyond Binary Labels: Political Ideology Prediction of Twitter Users
Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US. This study examines users' political ideology using a seven-point scale which enables us to identify politically moderate and neutral users - groups which are of particular interest to political scientists and pollsters. Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users. Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords. Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175 ], "paper_content_text": [ "Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US.", "This study examines users' political ideology using a sevenpoint scale which enables us to identify politically moderate and neutral usersgroups which are of particular interest to political scientists and pollsters.", "Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users.", "Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords.", "Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.", "Introduction Social media is used by people to share their opinions and views.", "Unsurprisingly, an important part of the population shares opinions and news related to politics or causes they support, thus offering strong cues about their political preferences and ideologies.", "In addition, political membership is also predictable purely from one's interests or demographics -it is much more likely for a religious person to be conservative or for a younger person to lean liberal (Ellis and Stimson, 2012) .", "* Work carried out during a research visit at the University of Pennsylvania User trait prediction from text is based on the assumption that language use reflects a user's demographics, psychological states or preferences.", "Applications include prediction of age (Rao et al., 2010; Flekova et al., 2016b) , gender (Burger et al., 2011; Sap et al., 2014) , personality (Schwartz et al., 2013; , socioeconomic status (Preoţiuc-Pietro et al., 2015a,b; Liu et al., 2016c) , popularity (Lampos et al., 2014) or location (Cheng et al., 2010) .", "Research on predicting political orientation has focused on methodological improvements (Pennacchiotti and Popescu, 2011) and used data sets with publicly stated dichotomous political orientation labels due to their easy accessibility (Sylwester and Purver, 2015) .", "However, these data sets are not representative samples of the entire population (Cohen and Ruths, 2013) and do not accurately reflect the variety of political attitudes and engagement (Kam et al., 2007) .", "For example, we expect users who state their political affiliation in their profile description, tweet with partisan hashtags or appear in public party lists to use social media as a means of popularizing and supporting their political 
beliefs (Bar-berASa, 2015) .", "Many users may choose not to publicly post about their political preference for various social goals or perhaps this preference may not be strong or representative enough to be disclosed online.", "Dichotomous political preference also ignores users who do not have a political ideology.", "All of these types of users are very important for researchers aiming to understand group preferences, traits or moral values (Lewis and Reiley, 2014; Hersh, 2015) .", "The most common political ideology spectrum in the US is the conservative -liberal (Ellis and Stimson, 2012) .", "We collect a novel data set of Twitter users mapped to this seven-point spectrum which allows us to: 1.", "Uncover the differences in language use between ideological groups; 2.", "Develop a user-level political ideology prediction algorithm that classifies all levels of engagement and leverages the structure in the political ideology spectrum.", "First, using a broad range of language features including unigrams, word clusters and emotions, we study the linguistic differences between the two ideologically extreme groups, the two ideologically moderate groups and between both extremes and moderates in order to provide insight into the content they post on Twitter.", "In addition, we examine the extent to which the ideological groups in our data set post about politics and compare it to a data set obtained similarly to previous work.", "In prediction experiments, we show how accurately we can distinguish between opposing ideological groups in various scenarios and that previous binary political orientation prediction has been oversimplified.", "Then, we measure the extent to which we can predict the two dimensions of political leaning and engagement.", "Finally, we build an ideology classifier in a multi-task learning setup that leverages the relationships between groups.", "1 Related Work Automatically inferring user traits from their online footprints is a prolific topic of research, enabled by the increasing availability of user generated data and advances in machine learning.", "Beyond its research oriented goals, user profiling has important industry applications in online marketing, personalization or large-scale audience profiling.", "To this end, researchers have used a wide range of types of online footprints, including video (Subramanian et al., 2013) , audio (Alam and Riccardi, 2014 ), text (Preoţiuc-Pietro et al., 2015a) , profile images (Liu et al., 2016a) , social data (Van Der Heide et al., 2012; Hall et al., 2014) , social networks (Perozzi and Skiena, 2015; Rout et al., 2013) , payment data (Wang et al., 2016) and endorsements .", "Political orientation prediction has been studied in two related, albeit crucially different scenarios, as also identified in (Zafar et al., 2016) .", "First, researchers aimed to identify and quantify orientation of words (Monroe et al., 2008) , hashtags (Weber et al., 2013) or documents (Iyyer et al., 2014) , or to detect bias (Yano et al., 2010) or impartiality (Zafar et al., 2016) at a document level.", "Our study belongs to the second category, where political orientation is inferred at a user-level.", "All previous studies study labeling US conservatives vs. 
liberals using either text (Rao et al., 2010) , social network connections (Zamal et al., 2012) , platform-specific features (Conover et al., 2011) or a combination of these (Pennacchiotti and Popescu, 2011; Volkova et al., 2014) , with very high reported accuracies of up to 94.9% (Conover et al., 2011) .", "However, all previous work on predicting userlevel political preferences are limited to a binary prediction between liberal/democrat and conservative/republican, disregarding any nuances in political ideology.", "In addition, as the focus of the studies is more on the methodological or interpretation aspects of the problem, another downside is that the user labels were obtained in simple, albeit biased ways.", "These include users who explicitly state their political orientation on user lists of party supporters (Zamal et al., 2012; Pennacchiotti and Popescu, 2011) , supporting partisan causes (Rao et al., 2010) , by following political figures (Volkova et al., 2014) or party accounts (Sylwester and Purver, 2015) or that retweet partisan hashtags (Conover et al., 2011) .", "As also identified in (Cohen and Ruths, 2013) and further confirmed later in this study, these data sets are biased: most people do not clearly state their political preference online -fewer than 5% according to Priante et al.", "(2016) -and those that state their preference are very likely to be political activists.", "Cohen and Ruths (2013) demonstrated that predictive accuracy of classifiers is significantly lower when confronted with users that do not explicitly mention their political orientation.", "Despite this, their study is limited because in their hardest classification task, they use crowdsourced political orientation labels, which may not correspond to reality and suffer from biases (Flekova et al., 2016a; .", "Further, they still only look at predicting binary political orientation.", "To date, no other research on this topic has taken into account these findings.", "Data Set The main data set used in this study consists of 3,938 users recruited through the Qualtrics platform (D 1 ).", "Each participant was compensated with 3 USD for 15 minutes of their time.", "All participants first answered the same demographic questions (including political ideology), then were directed to one of four sets of psychological questionnaires unrelated to the political ideology question.", "They were asked to self-report their political ideology on a seven point scale: Very conservative (1), Conservative (2), Moderately conservative (3), Moderate (4), Moderately liberal (5), Liberal (6), Very liberal (7).", "In addition, participants had the option of choosing Apathetic and Other, which have ambiguous fits on the conservative -liberal spectrum and were removed from our analysis (399 users).", "We also asked participants to self-report their gender (2322 female, 1205 male, 12 other) and age.", "Participants were all from the US in order to limit the impact of cultural and political factors.", "The political ideology distribution in our sample is presented in Figure 1 .", "We asked users their Twitter handle and downloaded their most recent 3,200 tweets, leading to a total of 4,833,133 tweets.", "Before adding users to our 3,938 user data set, we performed the following checks to ensure that the Twitter handle was the user's own: 1) after compensation, users were if they were truthful in reporting their handle and if not, we removed their data from analysis; 2) we manually examined all handles marked as verified by 
Twitter or that had over 2000 followers and eliminated them if they were celebrities or corporate/news accounts, as these were unlikely the users who participated in the survey.", "This study received approval from the Institutional Review Board (IRB) of the University of Pennsylvania.", "In addition, to facilitate comparison to previous work, we also use a data set of 13,651 users with overt political orientation (D 2 ).", "We selected popular political figures unambiguously associated with US liberal politics (@SenSanders, @JoeBiden, @CoryBooker, @JohnKerry) or US conservative politics (@marcorubio, @tedcruz, @RandPaul, @RealBenCarson).", "Liberals in our set (N l = 7417) had to follow on Twitter all of the liberal political figures and none of the conservative figures.", "Likewise, conservative users (N c = 6234) had to follow all of the conservative figures and no liberal figures.", "We downloaded up to 3,200 of each user's most recent tweets, leading to a total of 25,493,407 tweets.", "All tweets were downloaded around 10 August 2016.", "Features In our analysis, we use a broad range of linguistic features described below.", "Unigrams We use the bag-of-words representation to reduce each user's posting history to a normalised frequency distribution over the vocabulary consisting of all words used by at least 10% of the users (6,060 words).", "LIWC Traditional psychological studies use a dictionary-based approach to representing text.", "The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) , and automatically counts word frequencies for 64 different categories manually constructed based on psychological theory.", "These include different parts-of-speech, topical categories and emotions.", "Each user is thereby represented as a frequency distribution over these categories.", "Word2Vec Topics An alternative to LIWC is to use automatically generated word clusters i.e., groups of words that are semantically and/or syntactically similar.", "The clusters help reducing the feature space and provides additional interpretability.", "To create these groups of words, we use an automatic method that leverages word co-occurrence patterns in large corpora by making use of the distributional hypothesis: similar words tend to cooccur in similar contexts (Harris, 1954) .", "Based on co-occurrence statistics, each word is represented as a low dimensional vector of numbers with words closer in this space being more similar (Deerwester et al., 1990) .", "We use the method from (Preoţiuc-Pietro et al., 2015a) to compute topics using word2vec similarity (Mikolov et al., 2013a,b) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007) of different sizes (from 30 to 2000).", "We have tried other alternatives to building clusters: using other word similarities to generate clusters -such as NPMI (Lampos et al., 2014) or GloVe as proposed in (Preoţiuc-Pietro et al., 2015a) -or using standard topic modelling approached to create soft clusters of words e.g., Latent Dirichlet Allocation (Blei et al., 2003) .", "For brevity, we present experiments with the best performing feature set containing 500 Word2Vec clusters.", "We aggregate all the words posted in a users' tweets and represent each user as a distribution of the fraction of words belonging to each cluster.", "Sentiment & Emotions We hypothesise that different political ideologies differ in the type and amount of emotions the users express through their posts.", "The most studied model of discrete 
emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004) which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise.", "We automatically quantify these emotions from our Twitter data set using a publicly available crowd-sourcing derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment Turney, 2010, 2013) .", "Using these lexicons, we assign a predicted emotion to each message and then average across all users' posts to obtain user level emotion expression scores.", "Political Terms In order to select unigrams pertaining to politics, we assigned the most frequent 12,000 unigrams in our data set to three categories: • Political words: mentions of political terms (234); • Political NEs: mentions of politician proper names out of the political terms (39); • Media NEs: mentions of political media sources and pundits out of the political terms (20).", "This coding was initially performed by a research assistant studying political science with good knowledge of US politics and were further filtered and checked by one of the authors.", "Analysis First, we explore the relationships between language use and political ideological groups within each feature set and pairs of opposing user groups.", "To illustrate differences between ideological groups we compare the two political extremes (Very Conservative -Very Liberal) and the political moderates (Moderate Conservative -Moderate Liberal).", "We further compare outright moderates with a group combining the two political extremes to study if we can uncover differences in political engagement and extremity, regardless of the conservative-liberal leaning.", "We use univariate partial linear correlations with age and gender as co-variates to factor out the influence of basic demographics.", "For example, in D 1 , users who reported themselves as very conservative are older and more likely males (µ age = 35.1, pct male = 44%) than the data average (µ age = 31.2, pct male = 35%).", "Additionally, prior to combining the two ideologically extreme groups, we sub-sampled the larger class (Very Liberal) to match the smaller class (Very Conservative) in age and gender.", "In the later prediction experiments, we do not perform matching, as this represents useful signal for classification (Ellis and Stimson, 2012) .", "Results with unigrams are presented in Figure 2 and with the other features in Table 1 .", "These are selected using standard statistical significance tests.", "Very Conservatives vs.", "Very Liberals The comparison between the extreme categories reveals the largest number of significant differences.", "The unigrams and Word2Vec clusters specific to conservatives are dominated by religion specific terms ('praying', 'god', W2V-485, W2V-018, W2V-099, L-RELIG), confirming a well-documented relationship (Gelman, 2009) and words describing family relationships ('uncle', 'son', L-FAMILY), another conservative value (Lakoff, 1997) .", "The emphasis on religious terms among conservatives is consistent with the claim that many Americans associate 'conservative' with 'religious' (Ellis and Stimson, 2012) .", "Extreme liberals show a tendency to use more adjectives (W2V-075, W2V-110), adverbs (L-ADVERB), conjunctions (L-CONJ) and comparisons (L-COMPARE) which indicate more nuanced and complex posts.", "Extreme conservatives post tweets higher in all positive emotions than liberals (L-POSEMO, Emot-Joy, Emot-Positive), 
confirming a previously hypothesised relationship (Napier and Jost, 2008) .", "However, extreme liberals are not associated with posting negative emotions either, only using words that reflect more anxiety (L-ANX), which is related to neuroticism in which the liberals are higher (Gerber et al., 2010) .", "Political term analysis reveals the partisan terms Figure 2 : Unigrams with the highest 80 Pearson correlations shown as word clouds in three vertical panels with a binary variable representing the two ideological groups compared.", "The size of the unigram is scaled by its correlation with the ideological group in bold.", "The color indexes relative frequency, from light blue (rarely used) to dark blue (frequently used).", "All correlations are significant at p < .05 and controlled for age and gender.", "', 'racism', 'feminism', 'transgender') .", "This perhaps reflects the desire for conservatives on Twitter to identify like-minded individuals, as extreme conservatives are a minority on the platform.", "Liberals, by contrast, use the platform to discuss and popularize their causes.", "Moderate Conservatives vs.", "Moderate Liberals Comparing the two sides of moderate users reveals a slightly more nuanced view of the two ideologies.", "While moderate conservatives still make heavy use of religious terms and express positive emotions (Emot-Joy, L-DRIVES), they also use affiliative language (L-AFFILIATION) and plural pronouns (L-WE).", "Moderate liberals are identified by very different features compared to their more extreme counterparts.", "Most striking is the use of swear and sex words (L-SEXUAL, L-ANGER, W2V-316), also highlighted by Sylwester and Purver (2015) .", "Two word clusters relating to British culture (W2V-458) and art (W2V-373) reflect that liberals are more inclined towards arts (Dollinger, 2007) .", "Statistically significant political terms are very few compared to the previous comparison, probably due to their lower overall usage, which we further investigate later.", "Moderates vs. 
Extremists Our final comparison looks at outright moderates compared to the two extreme groups combined, as we hypothesise the existence of a difference in overall political engagement.", "Moderates are not characterized by many features besides a topic of casual words (W2V-098), indicating the heterogeneity of this group of users.", "However, regardless of their orientation, the ideological extremists stand out from moderates.", "They use words and word clusters related to political actors (W2V-309), issues (W2V-237) and laws (W2V-296, W2V-288).", "LIWC analysis uncovers differences in article use (L-ARTICLE) or power words (L-POWER) specific to political tweets.", "The overall sentiment of these users is negative (Emot-Fear, Emot-Disgust, Emot-Sadness, L-DEATH) compared to moderates.", "This reveals - combined with the finding from the first comparison - that while extreme conservatives are overall more positive than liberals, both groups share negative expression.", "Political terms are almost all significantly correlated with the extreme ideological groups, confirming the existence of a difference in political engagement which we study in detail next.", "Political Terms Figure 3 presents the use of the three types of political terms across the 7 ideological groups in D 1 and the two political groups from D 2 .", "We notice the following: • D 2 has a huge skew towards political words, with an average of more than three times more political terms across all three categories than our extreme classes from D 1 ; • Within the groups in D 1 , we observe an almost perfectly symmetrical U-shape across all three types of political terms, confirming our hypothesis about political engagement; • The difference between 1-2/6-7 is larger than 2-3/5-6.", "The extreme liberals and conservatives are disproportionately political, and have the potential to give Twitter's political discussions an unrepresentative, extremist hue (Fiorina, 1999) .", "It is also possible, however, that characterizing one as an extreme liberal or conservative indicates as much about her level of political engagement as it does about her placement on a left-right scale (Converse, 1964; Broockman, 2016) .", "Prediction In this section we build predictive models of political ideology and compare them to data sets obtained using previous work.", "Cross-Group Prediction First, we experiment with classifying between conservatives and liberals across various levels of political engagement in D 1 and between the two polarized groups in D 2 .", "We use logistic regression classification to compare three setups in Table 2 with results measured with ROC AUC as the classes are slightly imbalanced: • 10-fold cross-validation where training is performed on the same task as the testing (principal diagonal); • A train-test setup where training is performed on one task (presented in rows) and testing is performed on another (presented in columns); • A domain adaptation setup (results in brackets) where on each of the 10 folds, the 9 training folds (presented in rows) are supplemented with all the data from a different task (presented in columns) using the EasyAdapt algorithm (Daumé III, 2007) as a proof of concept of the effects of using additional distantly supervised data.", "Data pooling led to worse results than EasyAdapt.", "Each of the three tasks from D 1 has a similar number of training samples, hence we do not expect that data set size has any effect in comparing the results across tasks.", "The results with both sets of features show that: • Prediction performance is much higher for D 2 than for D 1 , with the more extreme groups in D 1 being easier to predict than the moderate groups.", "This confirms that the very high accuracies reported by previous research are an artifact of user label collection and that on regular users, the expected accuracy is much lower (Cohen and Ruths, 2013) .", "We further show that, as the level of political engagement decreases, the classification problem becomes even harder; • The model trained on D 2 and Word2Vec word clusters performs significantly worse on D 1 tasks even if the training data is over 10 times larger.", "When using political words, the D 2 trained classifier performs relatively well on all tasks from D 1 ; • Overall, using political words as features performs better than Word2Vec clusters in the binary classification tasks; • Domain adaptation helps in the majority of cases, leading to improvements of up to .03 in AUC (predicting 2v6 supplemented with 3v5 data).", "Political Leaning and Engagement Prediction Political leaning (Conservative - Liberal, excluding the Moderate group) can be considered an ordinal variable and the prediction problem framed as one of regression.", "In addition to the political leaning prediction, based on analysis and previous prediction results, we hypothesize the existence of a separate dimension of political engagement regardless of the partisan side.", "Thus, we merge users from classes 3-5, 2-6, 1-7 and create a variable with four values, where the lowest value is represented by moderate users (4) and the highest value is represented by either very conservative (1) or very liberal (7) users.", "We use a linear regression algorithm with an Elastic Net regularizer (Zou and Hastie, 2005) as implemented in ScikitLearn (Pedregosa et al., 2011) .", "To evaluate our results, we split our data into 10 stratified folds and performed cross-validation on one held-out fold at a time.", "For all our methods we tune the parameters of our models on a separate validation fold.", "The overall performance is assessed using Pearson correlation between the set of predicted values and the user-reported score.", "Results are presented in Table 3 .", "The same patterns hold when evaluating the results with Root Mean Squared Error (RMSE).", "Table 3 : Pearson correlations between the predictions and self-reported ideologies using linear regression with each feature category and a linear combination of their predictions in a 10-fold cross-validation setup.", "Political leaning is represented on the 1-7 scale removing the moderates (4).", "Political engagement is a scale ranging from 4 through 3-5 and 2-6 to 1-7.", "The results show that both dimensions can be predicted well above chance, with political leaning being easier to predict than engagement.", "Word2Vec clusters obtain the highest predictive accuracy for political leaning, even though they did not perform as well in the previous classification tasks.", "For political engagement, political terms and Word2Vec clusters obtain similar predictive accuracy.", "This result is expected based on the results from Figure 3 , which showed how political term usage varies across groups, and how it is especially dependent on political engagement.", "While political terms are very effective at distinguishing between two opposing political groups, they cannot discriminate as well between levels of engagement within the same ideological orientation.", "Combining all classifiers' predictions in a linear ensemble obtains best results when compared to each individual category.", "Encoding Class Structure In our previous experiments, we uncovered that certain relationships exist between the seven groups.", "For example, extreme conservatives and liberals both demonstrate strong political engagement.", "Therefore, this class structure can be exploited to improve classification performance.", "To this end, we deploy the sparse graph regularized approach (Argyriou et al., 2007; Zhou et al., 2011) to encode the structure of the seven classes as a graph regularizer in a logistic regression framework.", "In particular, we employed a multi-task learning paradigm, where each task is a one-vs-all classification.", "Multi-task learning (MTL) is a learning paradigm that jointly learns multiple related tasks and can achieve better generalization performance than learning each task individually, especially when presented with insufficient training samples (Liu et al., 2015, 2016b) .", "The group structure is encoded into a matrix R which codes the groups which are considered similar.", "The objective of the sparse graph regularized multi-task learning problem is: $\min_{W,c} \sum_{t=1}^{\tau} \sum_{i=1}^{N} \log\left(1 + \exp\left(-Y_{t,i}\left(W_t^{\top} X_{t,i} + c_t\right)\right)\right) + \gamma \|WR\|_F^2 + \lambda \|W\|_1$, where $\tau$ is the number of tasks, $N$ the number of samples, $X$ the feature matrix, $Y$ the outcome matrix, $W_t$ and $c_t$ the model for task $t$, and $R$ the structure matrix.", "We define three R matrices: (1) codes that groups with similar political engagement are similar (i.e. 1-7, 2-6, 3-5); (2) codes that groups from each ideological side are similar (i.e. 1-2, 1-3, 2-3, 5-6, 5-7, 6-7); (3) learnt from the data.", "Results are presented in Table 4 .", "Regular logistic regression performs slightly better than the majority class baseline, which demonstrates that the 7-class classification is a very hard problem, although most misclassifications are within one ideology point.", "The graph regularization (GR) improves the classification performance over logistic regression (LR) in all cases, with the political leaning based matrix (GR-Leaning) obtaining 2% higher accuracy than the political engagement one (GR-Engagement) and the learnt matrix (GR-Learnt) obtaining best results.", "Conclusions This study analyzed user-level political ideology through Twitter posts.", "In contrast to previous work, we made use of a novel data set where fine-grained user political ideology labels are obtained through surveys as opposed to binary self-reports.", "We showed that users in our data set are far less likely to post about politics and real-world fine-grained political ideology prediction is harder and more nuanced than previously reported.", "We analyzed language differences between the ideological groups and uncovered a dimension of political engagement separate from political leaning.", "Our work has implications for pollsters or marketers, who are most interested to identify and persuade moderate users.", "With respect to political conclusions, researchers commonly conceptualize ideology as a single, left-right dimension similar to what we observe in the U.S.
Congress (Ansolabehere et al., 2008; Bafumi and Herron, 2010) .", "Our results suggest a different direction: self-reported political extremity is more an indication of political engagement than of ideological self-placement (Abramowitz, 2010) .", "In fact, only self-reported extremists appear to devote much of their Twitter activity to politics at all.", "While our study focused solely on text posted by the user, follow-up work can use other modalities such as images or social network analysis to improve prediction performance.", "In addition, our work on user-level modeling can be integrated with work on message-level political bias to study how this is revealed across users with various levels of engagement.", "Another direction of future study will look at political ideology prediction in other countries and cultures, where ideology has different or multiple dimensions." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Data Set", "Features", "Analysis", "Very Conservatives vs. Very Liberals", "Moderate Conservatives vs. Moderate Liberals", "Moderates vs. Extremists", "Political Terms", "Prediction", "Cross-Group Prediction", "Political Leaning and Engagement Prediction", "Encoding Class Structure", "Conclusions" ] }
GEM-SciDuet-train-94#paper-1239#slide-1
Political Ideology and Text
Political ideology of a user is disclosed through language use I partisan political mentions or issues Previous CS/NLP research used data sets with user labels identified through: H1 Users are far more likely to be politically engaged H2 The prediction problem was so far over-simplified 3. Lists of Conservative/Liberal users 4. Followers of partisan accounts H4 Differences in language use exist between moderate and extreme users
Political ideology of a user is disclosed through language use I partisan political mentions or issues Previous CS/NLP research used data sets with user labels identified through: H1 Users are far more likely to be politically engaged H2 The prediction problem was so far over-simplified 3. Lists of Conservative/Liberal users 4. Followers of partisan accounts H4 Differences in language use exist between moderate and extreme users
[]
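The sparse graph-regularized multi-task objective quoted in the paper content above can be made concrete with a short sketch. The Python snippet below is an illustrative assumption, not the authors' implementation: the function name, the toy data, and the example structure matrix R (encoding the engagement pairs 1-7, 2-6 and 3-5 as signed columns) are invented for demonstration, and only the form of the loss follows the formula given in the text.

import numpy as np

def mtl_graph_objective(W, c, X, Y, R, gamma=1.0, lam=0.1):
    # W: (d, T) per-task weights, c: (T,) intercepts,
    # X: (N, d) user features, Y: (N, T) one-vs-all labels in {-1, +1},
    # R: (T, K) structure matrix whose columns encode pairs of similar tasks.
    scores = X @ W + c                                # (N, T) decision values
    logistic = np.logaddexp(0.0, -Y * scores).sum()   # sum of log(1 + exp(-y * s))
    graph_penalty = gamma * np.sum((W @ R) ** 2)      # gamma * ||W R||_F^2
    sparsity = lam * np.abs(W).sum()                  # lambda * ||W||_1
    return logistic + graph_penalty + sparsity

# Toy usage: 7 one-vs-all tasks (one per ideology group), 5 features, 20 users.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
Y = rng.choice([-1.0, 1.0], size=(20, 7))
W = rng.normal(scale=0.01, size=(5, 7))
c = np.zeros(7)
R = np.zeros((7, 3))
R[0, 0], R[6, 0] = 1.0, -1.0   # very conservative (1) ~ very liberal (7)
R[1, 1], R[5, 1] = 1.0, -1.0   # conservative (2) ~ liberal (6)
R[2, 2], R[4, 2] = 1.0, -1.0   # moderately conservative (3) ~ moderately liberal (5)
print(mtl_graph_objective(W, c, X, Y, R))

Minimizing this objective (for example with a proximal gradient solver) would push the weight vectors of the groups connected in R toward each other, which is the intuition behind the GR-Engagement variant described in the text.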
GEM-SciDuet-train-94#paper-1239#slide-2
1239
Beyond Binary Labels: Political Ideology Prediction of Twitter Users
Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US. This study examines users' political ideology using a sevenpoint scale which enables us to identify politically moderate and neutral usersgroups which are of particular interest to political scientists and pollsters. Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users. Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords. Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175 ], "paper_content_text": [ "Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US.", "This study examines users' political ideology using a sevenpoint scale which enables us to identify politically moderate and neutral usersgroups which are of particular interest to political scientists and pollsters.", "Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users.", "Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords.", "Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.", "Introduction Social media is used by people to share their opinions and views.", "Unsurprisingly, an important part of the population shares opinions and news related to politics or causes they support, thus offering strong cues about their political preferences and ideologies.", "In addition, political membership is also predictable purely from one's interests or demographics -it is much more likely for a religious person to be conservative or for a younger person to lean liberal (Ellis and Stimson, 2012) .", "* Work carried out during a research visit at the University of Pennsylvania User trait prediction from text is based on the assumption that language use reflects a user's demographics, psychological states or preferences.", "Applications include prediction of age (Rao et al., 2010; Flekova et al., 2016b) , gender (Burger et al., 2011; Sap et al., 2014) , personality (Schwartz et al., 2013; , socioeconomic status (Preoţiuc-Pietro et al., 2015a,b; Liu et al., 2016c) , popularity (Lampos et al., 2014) or location (Cheng et al., 2010) .", "Research on predicting political orientation has focused on methodological improvements (Pennacchiotti and Popescu, 2011) and used data sets with publicly stated dichotomous political orientation labels due to their easy accessibility (Sylwester and Purver, 2015) .", "However, these data sets are not representative samples of the entire population (Cohen and Ruths, 2013) and do not accurately reflect the variety of political attitudes and engagement (Kam et al., 2007) .", "For example, we expect users who state their political affiliation in their profile description, tweet with partisan hashtags or appear in public party lists to use social media as a means of popularizing and supporting their political 
beliefs (Bar-berASa, 2015) .", "Many users may choose not to publicly post about their political preference for various social goals or perhaps this preference may not be strong or representative enough to be disclosed online.", "Dichotomous political preference also ignores users who do not have a political ideology.", "All of these types of users are very important for researchers aiming to understand group preferences, traits or moral values (Lewis and Reiley, 2014; Hersh, 2015) .", "The most common political ideology spectrum in the US is the conservative -liberal (Ellis and Stimson, 2012) .", "We collect a novel data set of Twitter users mapped to this seven-point spectrum which allows us to: 1.", "Uncover the differences in language use between ideological groups; 2.", "Develop a user-level political ideology prediction algorithm that classifies all levels of engagement and leverages the structure in the political ideology spectrum.", "First, using a broad range of language features including unigrams, word clusters and emotions, we study the linguistic differences between the two ideologically extreme groups, the two ideologically moderate groups and between both extremes and moderates in order to provide insight into the content they post on Twitter.", "In addition, we examine the extent to which the ideological groups in our data set post about politics and compare it to a data set obtained similarly to previous work.", "In prediction experiments, we show how accurately we can distinguish between opposing ideological groups in various scenarios and that previous binary political orientation prediction has been oversimplified.", "Then, we measure the extent to which we can predict the two dimensions of political leaning and engagement.", "Finally, we build an ideology classifier in a multi-task learning setup that leverages the relationships between groups.", "1 Related Work Automatically inferring user traits from their online footprints is a prolific topic of research, enabled by the increasing availability of user generated data and advances in machine learning.", "Beyond its research oriented goals, user profiling has important industry applications in online marketing, personalization or large-scale audience profiling.", "To this end, researchers have used a wide range of types of online footprints, including video (Subramanian et al., 2013) , audio (Alam and Riccardi, 2014 ), text (Preoţiuc-Pietro et al., 2015a) , profile images (Liu et al., 2016a) , social data (Van Der Heide et al., 2012; Hall et al., 2014) , social networks (Perozzi and Skiena, 2015; Rout et al., 2013) , payment data (Wang et al., 2016) and endorsements .", "Political orientation prediction has been studied in two related, albeit crucially different scenarios, as also identified in (Zafar et al., 2016) .", "First, researchers aimed to identify and quantify orientation of words (Monroe et al., 2008) , hashtags (Weber et al., 2013) or documents (Iyyer et al., 2014) , or to detect bias (Yano et al., 2010) or impartiality (Zafar et al., 2016) at a document level.", "Our study belongs to the second category, where political orientation is inferred at a user-level.", "All previous studies study labeling US conservatives vs. 
liberals using either text (Rao et al., 2010) , social network connections (Zamal et al., 2012) , platform-specific features (Conover et al., 2011) or a combination of these (Pennacchiotti and Popescu, 2011; Volkova et al., 2014) , with very high reported accuracies of up to 94.9% (Conover et al., 2011) .", "However, all previous work on predicting userlevel political preferences are limited to a binary prediction between liberal/democrat and conservative/republican, disregarding any nuances in political ideology.", "In addition, as the focus of the studies is more on the methodological or interpretation aspects of the problem, another downside is that the user labels were obtained in simple, albeit biased ways.", "These include users who explicitly state their political orientation on user lists of party supporters (Zamal et al., 2012; Pennacchiotti and Popescu, 2011) , supporting partisan causes (Rao et al., 2010) , by following political figures (Volkova et al., 2014) or party accounts (Sylwester and Purver, 2015) or that retweet partisan hashtags (Conover et al., 2011) .", "As also identified in (Cohen and Ruths, 2013) and further confirmed later in this study, these data sets are biased: most people do not clearly state their political preference online -fewer than 5% according to Priante et al.", "(2016) -and those that state their preference are very likely to be political activists.", "Cohen and Ruths (2013) demonstrated that predictive accuracy of classifiers is significantly lower when confronted with users that do not explicitly mention their political orientation.", "Despite this, their study is limited because in their hardest classification task, they use crowdsourced political orientation labels, which may not correspond to reality and suffer from biases (Flekova et al., 2016a; .", "Further, they still only look at predicting binary political orientation.", "To date, no other research on this topic has taken into account these findings.", "Data Set The main data set used in this study consists of 3,938 users recruited through the Qualtrics platform (D 1 ).", "Each participant was compensated with 3 USD for 15 minutes of their time.", "All participants first answered the same demographic questions (including political ideology), then were directed to one of four sets of psychological questionnaires unrelated to the political ideology question.", "They were asked to self-report their political ideology on a seven point scale: Very conservative (1), Conservative (2), Moderately conservative (3), Moderate (4), Moderately liberal (5), Liberal (6), Very liberal (7).", "In addition, participants had the option of choosing Apathetic and Other, which have ambiguous fits on the conservative -liberal spectrum and were removed from our analysis (399 users).", "We also asked participants to self-report their gender (2322 female, 1205 male, 12 other) and age.", "Participants were all from the US in order to limit the impact of cultural and political factors.", "The political ideology distribution in our sample is presented in Figure 1 .", "We asked users their Twitter handle and downloaded their most recent 3,200 tweets, leading to a total of 4,833,133 tweets.", "Before adding users to our 3,938 user data set, we performed the following checks to ensure that the Twitter handle was the user's own: 1) after compensation, users were if they were truthful in reporting their handle and if not, we removed their data from analysis; 2) we manually examined all handles marked as verified by 
Twitter or that had over 2000 followers and eliminated them if they were celebrities or corporate/news accounts, as these were unlikely the users who participated in the survey.", "This study received approval from the Institutional Review Board (IRB) of the University of Pennsylvania.", "In addition, to facilitate comparison to previous work, we also use a data set of 13,651 users with overt political orientation (D 2 ).", "We selected popular political figures unambiguously associated with US liberal politics (@SenSanders, @JoeBiden, @CoryBooker, @JohnKerry) or US conservative politics (@marcorubio, @tedcruz, @RandPaul, @RealBenCarson).", "Liberals in our set (N l = 7417) had to follow on Twitter all of the liberal political figures and none of the conservative figures.", "Likewise, conservative users (N c = 6234) had to follow all of the conservative figures and no liberal figures.", "We downloaded up to 3,200 of each user's most recent tweets, leading to a total of 25,493,407 tweets.", "All tweets were downloaded around 10 August 2016.", "Features In our analysis, we use a broad range of linguistic features described below.", "Unigrams We use the bag-of-words representation to reduce each user's posting history to a normalised frequency distribution over the vocabulary consisting of all words used by at least 10% of the users (6,060 words).", "LIWC Traditional psychological studies use a dictionary-based approach to representing text.", "The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) , and automatically counts word frequencies for 64 different categories manually constructed based on psychological theory.", "These include different parts-of-speech, topical categories and emotions.", "Each user is thereby represented as a frequency distribution over these categories.", "Word2Vec Topics An alternative to LIWC is to use automatically generated word clusters i.e., groups of words that are semantically and/or syntactically similar.", "The clusters help reducing the feature space and provides additional interpretability.", "To create these groups of words, we use an automatic method that leverages word co-occurrence patterns in large corpora by making use of the distributional hypothesis: similar words tend to cooccur in similar contexts (Harris, 1954) .", "Based on co-occurrence statistics, each word is represented as a low dimensional vector of numbers with words closer in this space being more similar (Deerwester et al., 1990) .", "We use the method from (Preoţiuc-Pietro et al., 2015a) to compute topics using word2vec similarity (Mikolov et al., 2013a,b) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007) of different sizes (from 30 to 2000).", "We have tried other alternatives to building clusters: using other word similarities to generate clusters -such as NPMI (Lampos et al., 2014) or GloVe as proposed in (Preoţiuc-Pietro et al., 2015a) -or using standard topic modelling approached to create soft clusters of words e.g., Latent Dirichlet Allocation (Blei et al., 2003) .", "For brevity, we present experiments with the best performing feature set containing 500 Word2Vec clusters.", "We aggregate all the words posted in a users' tweets and represent each user as a distribution of the fraction of words belonging to each cluster.", "Sentiment & Emotions We hypothesise that different political ideologies differ in the type and amount of emotions the users express through their posts.", "The most studied model of discrete 
emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004) which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise.", "We automatically quantify these emotions from our Twitter data set using a publicly available crowd-sourcing derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment Turney, 2010, 2013) .", "Using these lexicons, we assign a predicted emotion to each message and then average across all users' posts to obtain user level emotion expression scores.", "Political Terms In order to select unigrams pertaining to politics, we assigned the most frequent 12,000 unigrams in our data set to three categories: • Political words: mentions of political terms (234); • Political NEs: mentions of politician proper names out of the political terms (39); • Media NEs: mentions of political media sources and pundits out of the political terms (20).", "This coding was initially performed by a research assistant studying political science with good knowledge of US politics and were further filtered and checked by one of the authors.", "Analysis First, we explore the relationships between language use and political ideological groups within each feature set and pairs of opposing user groups.", "To illustrate differences between ideological groups we compare the two political extremes (Very Conservative -Very Liberal) and the political moderates (Moderate Conservative -Moderate Liberal).", "We further compare outright moderates with a group combining the two political extremes to study if we can uncover differences in political engagement and extremity, regardless of the conservative-liberal leaning.", "We use univariate partial linear correlations with age and gender as co-variates to factor out the influence of basic demographics.", "For example, in D 1 , users who reported themselves as very conservative are older and more likely males (µ age = 35.1, pct male = 44%) than the data average (µ age = 31.2, pct male = 35%).", "Additionally, prior to combining the two ideologically extreme groups, we sub-sampled the larger class (Very Liberal) to match the smaller class (Very Conservative) in age and gender.", "In the later prediction experiments, we do not perform matching, as this represents useful signal for classification (Ellis and Stimson, 2012) .", "Results with unigrams are presented in Figure 2 and with the other features in Table 1 .", "These are selected using standard statistical significance tests.", "Very Conservatives vs.", "Very Liberals The comparison between the extreme categories reveals the largest number of significant differences.", "The unigrams and Word2Vec clusters specific to conservatives are dominated by religion specific terms ('praying', 'god', W2V-485, W2V-018, W2V-099, L-RELIG), confirming a well-documented relationship (Gelman, 2009) and words describing family relationships ('uncle', 'son', L-FAMILY), another conservative value (Lakoff, 1997) .", "The emphasis on religious terms among conservatives is consistent with the claim that many Americans associate 'conservative' with 'religious' (Ellis and Stimson, 2012) .", "Extreme liberals show a tendency to use more adjectives (W2V-075, W2V-110), adverbs (L-ADVERB), conjunctions (L-CONJ) and comparisons (L-COMPARE) which indicate more nuanced and complex posts.", "Extreme conservatives post tweets higher in all positive emotions than liberals (L-POSEMO, Emot-Joy, Emot-Positive), 
confirming a previously hypothesised relationship (Napier and Jost, 2008) .", "However, extreme liberals are not associated with posting negative emotions either, only using words that reflect more anxiety (L-ANX), which is related to neuroticism in which the liberals are higher (Gerber et al., 2010) .", "Political term analysis reveals the partisan terms Figure 2 : Unigrams with the highest 80 Pearson correlations shown as word clouds in three vertical panels with a binary variable representing the two ideological groups compared.", "The size of the unigram is scaled by its correlation with the ideological group in bold.", "The color indexes relative frequency, from light blue (rarely used) to dark blue (frequently used).", "All correlations are significant at p < .05 and controlled for age and gender.", "', 'racism', 'feminism', 'transgender') .", "This perhaps reflects the desire for conservatives on Twitter to identify like-minded individuals, as extreme conservatives are a minority on the platform.", "Liberals, by contrast, use the platform to discuss and popularize their causes.", "Moderate Conservatives vs.", "Moderate Liberals Comparing the two sides of moderate users reveals a slightly more nuanced view of the two ideologies.", "While moderate conservatives still make heavy use of religious terms and express positive emotions (Emot-Joy, L-DRIVES), they also use affiliative language (L-AFFILIATION) and plural pronouns (L-WE).", "Moderate liberals are identified by very different features compared to their more extreme counterparts.", "Most striking is the use of swear and sex words (L-SEXUAL, L-ANGER, W2V-316), also highlighted by Sylwester and Purver (2015) .", "Two word clusters relating to British culture (W2V-458) and art (W2V-373) reflect that liberals are more inclined towards arts (Dollinger, 2007) .", "Statistically significant political terms are very few compared to the previous comparison, probably due to their lower overall usage, which we further investigate later.", "Moderates vs. 
Extremists Our final comparison looks at outright moderates compared to the two extreme groups combined, as we hypothesise the existence of a difference in overall political engagement.", "Moderates are not characterized by many features besides a topic of casual words (W2V-098), indicating the heterogeneity of this group of users.", "However, regardless of their orientation, the ideological extremists stand out from moderates.", "They use words and word clusters related to political actors (W2V-309), issues (W2V-237) and laws (W2V-296, W2V-288).", "LIWC analysis uncovers differences in article use (L-ARTICLE) or power words (L-POWER) specific of political tweets.", "The overall sentiment of these users is negative (Emot-Fear, Emot-Disgust, Emot-Sadness, L-DEATH) compared to moderates.", "This reveals -combined with the finding from the first comparison -that while extreme conservatives are overall more positive than liberals, both groups share negative expression.", "Political terms are almost all significantly correlated with the extreme ideological groups, Con.", "(1) Con.", "(2) M.Con.", "(3) Mod.", "(4) confirming the existence of a difference in political engagement which we study in detail next.", "Figure 3 presents the use of the three types of political terms across the 7 ideological groups in D 1 and the two political groups from D 2 .", "We notice the following: Political Terms • D 2 has a huge skew towards political words, with an average of more than three times more political terms across all three categories than our extreme classes from D 1 ; • Within the groups in D 1 , we observe an almost perfectly symmetrical U-shape across all three types of political terms, confirming our hypothesis about political engagement; • The difference between 1-2/6-7 is larger than 2-3/5-6.", "The extreme liberals and conservatives are disproportionately political, and have the potential to give Twitter's political discussions an unrepresentative, extremist hue (Fiorina, 1999) .", "It is also possible, however, that characterizing one as an extreme liberal or conservative indicates as much about her level of political engagement as it does about her placement on a left-right scale (Converse, 1964; Broockman, 2016) .", "Prediction In this section we build predictive models of political ideology and compare them to data sets obtained using previous work.", "Cross-Group Prediction First, we experiment with classifying between conservatives and liberals across various levels of political engagement in D 1 and between the two polarized groups in D 2 .", "We use logistic regression classification to compare three setups in Table 2 with results measured with ROC AUC as the classes are slightly inbalanced: • 10-fold cross-validation where training is performed on the same task as the testing (principal diagonal); • A train-test setup where training is performed on one task (presented in rows) and testing is performed on another (presented in columns); • A domain adaptation setup (results in brackets) where on each of the 10 folds, the 9 training folds (presented in rows) are supplemented with all the data from a different task (presented in columns) using the EasyAdapt algorithm (Daumé III, 2007) as a proof on concept on the effects of using additional distantly supervised data.", "Data pooling lead to worse results than EasyAdapt.", "Each of the three tasks from D 1 have a similar number of training samples, hence we do not expect that data set size has any effects in comparing the results across 
tasks.", "The results with both sets of features show that: • Prediction performance is much higher for D 2 than for D 1 , with the more extreme groups in D 1 being easier to predict than the moderate groups.", "This confirms that the very high accuracies reported by previous research are an artifact of user label collection and that on regular users, the expected accuracy is much lower (Cohen and Ruths, 2013) .", "We further show that, as the level of political engagement decreases, the classification problem becomes even harder; • The model trained on D 2 and Word2Vec word clusters performs significantly worse on D 1 tasks even if the training data is over 10 times larger.", "When using political words, the D 2 trained classifier performs relatively well on all tasks from D 1 ; • Overall, using political words as features performs better than Word2Vec clusters in the binary classification tasks; • Domain adaptation helps in the majority of cases, leading to improvements of up to .03 in AUC (predicting 2v6 supplemented with 3v5 data).", "Political Leaning and Engagement Prediction Political leaning (Conservative -Liberal, excluding the Moderate group) can be considered an ordinal variable and the prediction problem framed as one of regression.", "In addition to the political leaning prediction, based on analysis and previous prediction results, we hypothesize the existence of a separate dimension of political engagement regardless of the partisan side.", "Thus, we merge users from classes 3-5, 2-6, 1-7 and create a variable with four values, where the lowest value is represented by moderate users (4) and the highest value is represented by either very conservative (1) or very liberal (7) users.", "We use a linear regression algorithm with an Elastic Net regularizer (Zou and Hastie, 2005) as implemented in ScikitLearn (Pedregosa et al., 2011) .", "To evaluate our results, we split our data into 10 stratified folds and performed crossvalidation on one held-out fold at a time.", "For all our methods we tune the parameters of our models on a separate validation fold.", "The overall performance is assessed using Pearson correlation between the set of predicted values and the userreported score.", "Results are presented in Table 3 .", "735 The same patterns hold when evaluating the results with Root Mean Squared Error (RMSE).", "Table 3 : Pearson correlations between the predictions and self-reported ideologies using linear regression with each feature category and a linear combination of their predictions in a 10-fold cross-validation setup.", "Political leaning is represented on the 1-7 scale removing the moderates (4).", "Political engagement is a scale ranging from 4 through 3-5 and 2-6 to 1-7.", "The results show that both dimensions can be predicted well above chance, with political leaning being easier to predict than engagement.", "Word2Vec clusters obtain the highest predictive accuracy for political leaning, even though they did not perform as well in the previous classification tasks.", "For political engagement, political terms and Word2Vec clusters obtain similar predictive accuracy.", "This result is expected based on the results from Figure 3 , which showed how political term usage varies across groups, and how it is especially dependent on political engagement.", "While political terms are very effective at distinguishing between two opposing political groups, they can not discriminate as well between levels of engagement within the same ideological orientation.", "Combining all 
classifiers' predictions in a linear ensemble obtains best results when compared to each individual category.", "Encoding Class Structure In our previous experiments, we uncovered that certain relationships exist between the seven groups.", "For example, extreme conservatives and liberals both demonstrate strong political engagement.", "Therefore, this class structure can be exploited to improve classification performance.", "To this end, we deploy the sparse graph regularized approach (Argyriou et al., 2007; Zhou et al., 2011) to encode the structure of the seven classes as a graph regularizer in a logistic regression framework.", "In particular, we employed a multi-task learning paradigm, where each task is a one-vs-all classification.", "Multi-task learning (MTL) is a learning paradigm that jointly learns multiple related tasks and can achieve better generalization performance than learning each task individually, especially when presented with insufficient training samples (Liu et al., 2015 (Liu et al., , 2016b .", "The group structure is encoded into a matrix R which codes the groups which are considered similar.", "The objective of the sparse graph regularized multi-task learning problem is: min W,c τ t=1 N i=1 log(1 + exp(−Y t,i (W T i,t X t,i + c t ))) + γ WR 2 F + λ W 1 , where τ is the number of tasks, |N | the number of samples, X the feature matrix, Y the outcome matrix, W i,t and c t is the model for task t and R is the structure matrix.", "We define three R matrices: (1) codes that groups with similar political engagement are similar (i.e.", "1-7, 2-6, 3-5); (2) codes that groups from each ideological side are similar (i.e.", "1-2, 1-3, 2-3, 5-6, 5-7, 6-7); (3) learnt from the data.", "Results are presented in Table 4 .", "Regular logistic regression performs slightly better than the majority class baseline, which demonstrates that the 7class classification is a very hard problem although most miss-classifications are within one ideology point.", "The graph regularization (GR) improves the classification performance over logistic regression (LR) in all cases, with political leaning based matrix (GR-Leaning) obtaining 2% in accuracy higher than the political engagement one (GR-Engagement) and the learnt matrix (GR-Learnt) obtaining best results.", "Conclusions This study analyzed user-level political ideology through Twitter posts.", "In contrast to previous work, we made use of a novel data set where finegrained user political ideology labels are obtained through surveys as opposed to binary self-reports.", "We showed that users in our data set are far less likely to post about politics and real-world finegrained political ideology prediction is harder and more nuanced than previously reported.", "We analyzed language differences between the ideological groups and uncovered a dimension of political engagement separate from political leaning.", "Our work has implications for pollsters or marketers, who are most interested to identify and persuade moderate users.", "With respect to political conclusions, researchers commonly conceptualize ideology as a single, left-right dimension similar to what we observe in the U.S. 
Congress (Ansolabehere et al., 2008; Bafumi and Herron, 2010) .", "Our results suggest a different direction: self-reported political extremity is more an indication of political engagement than of ideological self-placement (Abramowitz, 2010) .", "In fact, only self-reported extremists appear to devote much of their Twitter activity to politics at all.", "While our study focused solely on text posted by the user, follow-up work can use other modalities such as images or social network analysis to improve prediction performance.", "In addition, our work on user-level modeling can be integrated with work on message-level political bias to study how this is revealed across users with various levels of engagement.", "Another direction of future study will look at political ideology prediction in other countries and cultures, where ideology has different or multiple dimensions." ] }
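A minimal sketch of the leaning/engagement regression setup described in the paper content above (Elastic Net regression, 10 stratified folds, Pearson correlation between predictions and self-reported scores). The feature matrix X, label vector y and hyperparameter values are placeholders, and the per-fold tuning on a separate validation fold is omitted; this illustrates the evaluation protocol rather than reproducing the authors' code.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import StratifiedKFold

def engagement_scale(y):
    """Collapse the 1-7 leaning labels onto the 4-level engagement scale:
    0 = moderate (4), 1 = classes 3/5, 2 = classes 2/6, 3 = classes 1/7."""
    return np.abs(np.asarray(y) - 4)

def cross_validated_pearson(X, y, alpha=1.0, l1_ratio=0.5, n_folds=10, seed=0):
    """10-fold stratified cross-validation of an Elastic Net regressor,
    scored with the Pearson correlation between predictions and labels."""
    y = np.asarray(y)
    preds = np.zeros(len(y))
    folds = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for train_idx, test_idx in folds.split(X, y):
        model = ElasticNet(alpha=alpha, l1_ratio=l1_ratio)
        model.fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])
    r, _ = pearsonr(preds, y)
    return r
```

The engagement_scale helper mirrors the paper's merging of classes 3-5, 2-6 and 1-7 into a four-level engagement variable.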
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Data Set", "Features", "Analysis", "Very Conservatives vs. Very Liberals", "Moderate Conservatives vs. Moderate Liberals", "Moderates vs. Extremists", "Political Terms", "Prediction", "Cross-Group Prediction", "Political Leaning and Engagement Prediction", "Encoding Class Structure", "Conclusions" ] }
GEM-SciDuet-train-94#paper-1239#slide-2
Data
I specific of country and culture I our use case is US politics (similar to all previous work) I the major US ideology spectrum is Conservative Liberal I seven point scale We collect a new data set: I public Twitter handle with >100 posts Political ideology is reported through an online survey I only way to obtain unbiased ground truth labels (Flekova et al. I additionally reported age, gender and other demographics I full data for research purposes I aggregate for replicability I Twitter Developer Agreement & Policy VII.A4 Twitter Content, and information derived from Twitter Content, may not be used by, or knowingly displayed, distributed, or otherwise made available to any entity to target, segment, or profile individuals based on [...] political I Study approved by the Internal Review Board (IRB) of the For comparison to previous work, we collect a data set: I follow liberal/conservative politicians on Twitter
I specific of country and culture I our use case is US politics (similar to all previous work) I the major US ideology spectrum is Conservative Liberal I seven point scale We collect a new data set: I public Twitter handle with >100 posts Political ideology is reported through an online survey I only way to obtain unbiased ground truth labels (Flekova et al. I additionally reported age, gender and other demographics I full data for research purposes I aggregate for replicability I Twitter Developer Agreement & Policy VII.A4 Twitter Content, and information derived from Twitter Content, may not be used by, or knowingly displayed, distributed, or otherwise made available to any entity to target, segment, or profile individuals based on [...] political I Study approved by the Internal Review Board (IRB) of the For comparison to previous work, we collect a data set: I follow liberal/conservative politicians on Twitter
[]
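The unigram features referenced in these records are per-user normalized frequency distributions over the words used by at least 10% of users. A small sketch of that representation, assuming each user's posting history is already tokenized into a list of words:

```python
from collections import Counter

def build_vocab(users_tokens, min_user_fraction=0.10):
    """Keep words used by at least `min_user_fraction` of all users."""
    n_users = len(users_tokens)
    doc_freq = Counter()
    for tokens in users_tokens:
        doc_freq.update(set(tokens))
    return sorted(w for w, df in doc_freq.items() if df / n_users >= min_user_fraction)

def unigram_features(tokens, vocab):
    """Normalized frequency of each vocabulary word in one user's posting history."""
    vocab_set = set(vocab)
    counts = Counter(t for t in tokens if t in vocab_set)
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in vocab]
```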
GEM-SciDuet-train-94#paper-1239#slide-4
1239
Beyond Binary Labels: Political Ideology Prediction of Twitter Users
Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US. This study examines users' political ideology using a seven-point scale which enables us to identify politically moderate and neutral users - groups which are of particular interest to political scientists and pollsters. Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users. Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords. Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175 ], "paper_content_text": [ "Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US.", "This study examines users' political ideology using a sevenpoint scale which enables us to identify politically moderate and neutral usersgroups which are of particular interest to political scientists and pollsters.", "Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users.", "Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords.", "Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.", "Introduction Social media is used by people to share their opinions and views.", "Unsurprisingly, an important part of the population shares opinions and news related to politics or causes they support, thus offering strong cues about their political preferences and ideologies.", "In addition, political membership is also predictable purely from one's interests or demographics -it is much more likely for a religious person to be conservative or for a younger person to lean liberal (Ellis and Stimson, 2012) .", "* Work carried out during a research visit at the University of Pennsylvania User trait prediction from text is based on the assumption that language use reflects a user's demographics, psychological states or preferences.", "Applications include prediction of age (Rao et al., 2010; Flekova et al., 2016b) , gender (Burger et al., 2011; Sap et al., 2014) , personality (Schwartz et al., 2013; , socioeconomic status (Preoţiuc-Pietro et al., 2015a,b; Liu et al., 2016c) , popularity (Lampos et al., 2014) or location (Cheng et al., 2010) .", "Research on predicting political orientation has focused on methodological improvements (Pennacchiotti and Popescu, 2011) and used data sets with publicly stated dichotomous political orientation labels due to their easy accessibility (Sylwester and Purver, 2015) .", "However, these data sets are not representative samples of the entire population (Cohen and Ruths, 2013) and do not accurately reflect the variety of political attitudes and engagement (Kam et al., 2007) .", "For example, we expect users who state their political affiliation in their profile description, tweet with partisan hashtags or appear in public party lists to use social media as a means of popularizing and supporting their political 
beliefs (Bar-berASa, 2015) .", "Many users may choose not to publicly post about their political preference for various social goals or perhaps this preference may not be strong or representative enough to be disclosed online.", "Dichotomous political preference also ignores users who do not have a political ideology.", "All of these types of users are very important for researchers aiming to understand group preferences, traits or moral values (Lewis and Reiley, 2014; Hersh, 2015) .", "The most common political ideology spectrum in the US is the conservative -liberal (Ellis and Stimson, 2012) .", "We collect a novel data set of Twitter users mapped to this seven-point spectrum which allows us to: 1.", "Uncover the differences in language use between ideological groups; 2.", "Develop a user-level political ideology prediction algorithm that classifies all levels of engagement and leverages the structure in the political ideology spectrum.", "First, using a broad range of language features including unigrams, word clusters and emotions, we study the linguistic differences between the two ideologically extreme groups, the two ideologically moderate groups and between both extremes and moderates in order to provide insight into the content they post on Twitter.", "In addition, we examine the extent to which the ideological groups in our data set post about politics and compare it to a data set obtained similarly to previous work.", "In prediction experiments, we show how accurately we can distinguish between opposing ideological groups in various scenarios and that previous binary political orientation prediction has been oversimplified.", "Then, we measure the extent to which we can predict the two dimensions of political leaning and engagement.", "Finally, we build an ideology classifier in a multi-task learning setup that leverages the relationships between groups.", "1 Related Work Automatically inferring user traits from their online footprints is a prolific topic of research, enabled by the increasing availability of user generated data and advances in machine learning.", "Beyond its research oriented goals, user profiling has important industry applications in online marketing, personalization or large-scale audience profiling.", "To this end, researchers have used a wide range of types of online footprints, including video (Subramanian et al., 2013) , audio (Alam and Riccardi, 2014 ), text (Preoţiuc-Pietro et al., 2015a) , profile images (Liu et al., 2016a) , social data (Van Der Heide et al., 2012; Hall et al., 2014) , social networks (Perozzi and Skiena, 2015; Rout et al., 2013) , payment data (Wang et al., 2016) and endorsements .", "Political orientation prediction has been studied in two related, albeit crucially different scenarios, as also identified in (Zafar et al., 2016) .", "First, researchers aimed to identify and quantify orientation of words (Monroe et al., 2008) , hashtags (Weber et al., 2013) or documents (Iyyer et al., 2014) , or to detect bias (Yano et al., 2010) or impartiality (Zafar et al., 2016) at a document level.", "Our study belongs to the second category, where political orientation is inferred at a user-level.", "All previous studies study labeling US conservatives vs. 
liberals using either text (Rao et al., 2010) , social network connections (Zamal et al., 2012) , platform-specific features (Conover et al., 2011) or a combination of these (Pennacchiotti and Popescu, 2011; Volkova et al., 2014) , with very high reported accuracies of up to 94.9% (Conover et al., 2011) .", "However, all previous work on predicting userlevel political preferences are limited to a binary prediction between liberal/democrat and conservative/republican, disregarding any nuances in political ideology.", "In addition, as the focus of the studies is more on the methodological or interpretation aspects of the problem, another downside is that the user labels were obtained in simple, albeit biased ways.", "These include users who explicitly state their political orientation on user lists of party supporters (Zamal et al., 2012; Pennacchiotti and Popescu, 2011) , supporting partisan causes (Rao et al., 2010) , by following political figures (Volkova et al., 2014) or party accounts (Sylwester and Purver, 2015) or that retweet partisan hashtags (Conover et al., 2011) .", "As also identified in (Cohen and Ruths, 2013) and further confirmed later in this study, these data sets are biased: most people do not clearly state their political preference online -fewer than 5% according to Priante et al.", "(2016) -and those that state their preference are very likely to be political activists.", "Cohen and Ruths (2013) demonstrated that predictive accuracy of classifiers is significantly lower when confronted with users that do not explicitly mention their political orientation.", "Despite this, their study is limited because in their hardest classification task, they use crowdsourced political orientation labels, which may not correspond to reality and suffer from biases (Flekova et al., 2016a; .", "Further, they still only look at predicting binary political orientation.", "To date, no other research on this topic has taken into account these findings.", "Data Set The main data set used in this study consists of 3,938 users recruited through the Qualtrics platform (D 1 ).", "Each participant was compensated with 3 USD for 15 minutes of their time.", "All participants first answered the same demographic questions (including political ideology), then were directed to one of four sets of psychological questionnaires unrelated to the political ideology question.", "They were asked to self-report their political ideology on a seven point scale: Very conservative (1), Conservative (2), Moderately conservative (3), Moderate (4), Moderately liberal (5), Liberal (6), Very liberal (7).", "In addition, participants had the option of choosing Apathetic and Other, which have ambiguous fits on the conservative -liberal spectrum and were removed from our analysis (399 users).", "We also asked participants to self-report their gender (2322 female, 1205 male, 12 other) and age.", "Participants were all from the US in order to limit the impact of cultural and political factors.", "The political ideology distribution in our sample is presented in Figure 1 .", "We asked users their Twitter handle and downloaded their most recent 3,200 tweets, leading to a total of 4,833,133 tweets.", "Before adding users to our 3,938 user data set, we performed the following checks to ensure that the Twitter handle was the user's own: 1) after compensation, users were if they were truthful in reporting their handle and if not, we removed their data from analysis; 2) we manually examined all handles marked as verified by 
Twitter or that had over 2000 followers and eliminated them if they were celebrities or corporate/news accounts, as these were unlikely the users who participated in the survey.", "This study received approval from the Institutional Review Board (IRB) of the University of Pennsylvania.", "In addition, to facilitate comparison to previous work, we also use a data set of 13,651 users with overt political orientation (D 2 ).", "We selected popular political figures unambiguously associated with US liberal politics (@SenSanders, @JoeBiden, @CoryBooker, @JohnKerry) or US conservative politics (@marcorubio, @tedcruz, @RandPaul, @RealBenCarson).", "Liberals in our set (N l = 7417) had to follow on Twitter all of the liberal political figures and none of the conservative figures.", "Likewise, conservative users (N c = 6234) had to follow all of the conservative figures and no liberal figures.", "We downloaded up to 3,200 of each user's most recent tweets, leading to a total of 25,493,407 tweets.", "All tweets were downloaded around 10 August 2016.", "Features In our analysis, we use a broad range of linguistic features described below.", "Unigrams We use the bag-of-words representation to reduce each user's posting history to a normalised frequency distribution over the vocabulary consisting of all words used by at least 10% of the users (6,060 words).", "LIWC Traditional psychological studies use a dictionary-based approach to representing text.", "The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) , and automatically counts word frequencies for 64 different categories manually constructed based on psychological theory.", "These include different parts-of-speech, topical categories and emotions.", "Each user is thereby represented as a frequency distribution over these categories.", "Word2Vec Topics An alternative to LIWC is to use automatically generated word clusters i.e., groups of words that are semantically and/or syntactically similar.", "The clusters help reducing the feature space and provides additional interpretability.", "To create these groups of words, we use an automatic method that leverages word co-occurrence patterns in large corpora by making use of the distributional hypothesis: similar words tend to cooccur in similar contexts (Harris, 1954) .", "Based on co-occurrence statistics, each word is represented as a low dimensional vector of numbers with words closer in this space being more similar (Deerwester et al., 1990) .", "We use the method from (Preoţiuc-Pietro et al., 2015a) to compute topics using word2vec similarity (Mikolov et al., 2013a,b) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007) of different sizes (from 30 to 2000).", "We have tried other alternatives to building clusters: using other word similarities to generate clusters -such as NPMI (Lampos et al., 2014) or GloVe as proposed in (Preoţiuc-Pietro et al., 2015a) -or using standard topic modelling approached to create soft clusters of words e.g., Latent Dirichlet Allocation (Blei et al., 2003) .", "For brevity, we present experiments with the best performing feature set containing 500 Word2Vec clusters.", "We aggregate all the words posted in a users' tweets and represent each user as a distribution of the fraction of words belonging to each cluster.", "Sentiment & Emotions We hypothesise that different political ideologies differ in the type and amount of emotions the users express through their posts.", "The most studied model of discrete 
emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004) which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise.", "We automatically quantify these emotions from our Twitter data set using a publicly available crowd-sourcing derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment Turney, 2010, 2013) .", "Using these lexicons, we assign a predicted emotion to each message and then average across all users' posts to obtain user level emotion expression scores.", "Political Terms In order to select unigrams pertaining to politics, we assigned the most frequent 12,000 unigrams in our data set to three categories: • Political words: mentions of political terms (234); • Political NEs: mentions of politician proper names out of the political terms (39); • Media NEs: mentions of political media sources and pundits out of the political terms (20).", "This coding was initially performed by a research assistant studying political science with good knowledge of US politics and were further filtered and checked by one of the authors.", "Analysis First, we explore the relationships between language use and political ideological groups within each feature set and pairs of opposing user groups.", "To illustrate differences between ideological groups we compare the two political extremes (Very Conservative -Very Liberal) and the political moderates (Moderate Conservative -Moderate Liberal).", "We further compare outright moderates with a group combining the two political extremes to study if we can uncover differences in political engagement and extremity, regardless of the conservative-liberal leaning.", "We use univariate partial linear correlations with age and gender as co-variates to factor out the influence of basic demographics.", "For example, in D 1 , users who reported themselves as very conservative are older and more likely males (µ age = 35.1, pct male = 44%) than the data average (µ age = 31.2, pct male = 35%).", "Additionally, prior to combining the two ideologically extreme groups, we sub-sampled the larger class (Very Liberal) to match the smaller class (Very Conservative) in age and gender.", "In the later prediction experiments, we do not perform matching, as this represents useful signal for classification (Ellis and Stimson, 2012) .", "Results with unigrams are presented in Figure 2 and with the other features in Table 1 .", "These are selected using standard statistical significance tests.", "Very Conservatives vs.", "Very Liberals The comparison between the extreme categories reveals the largest number of significant differences.", "The unigrams and Word2Vec clusters specific to conservatives are dominated by religion specific terms ('praying', 'god', W2V-485, W2V-018, W2V-099, L-RELIG), confirming a well-documented relationship (Gelman, 2009) and words describing family relationships ('uncle', 'son', L-FAMILY), another conservative value (Lakoff, 1997) .", "The emphasis on religious terms among conservatives is consistent with the claim that many Americans associate 'conservative' with 'religious' (Ellis and Stimson, 2012) .", "Extreme liberals show a tendency to use more adjectives (W2V-075, W2V-110), adverbs (L-ADVERB), conjunctions (L-CONJ) and comparisons (L-COMPARE) which indicate more nuanced and complex posts.", "Extreme conservatives post tweets higher in all positive emotions than liberals (L-POSEMO, Emot-Joy, Emot-Positive), 
confirming a previously hypothesised relationship (Napier and Jost, 2008) .", "However, extreme liberals are not associated with posting negative emotions either, only using words that reflect more anxiety (L-ANX), which is related to neuroticism in which the liberals are higher (Gerber et al., 2010) .", "Political term analysis reveals the partisan terms Figure 2 : Unigrams with the highest 80 Pearson correlations shown as word clouds in three vertical panels with a binary variable representing the two ideological groups compared.", "The size of the unigram is scaled by its correlation with the ideological group in bold.", "The color indexes relative frequency, from light blue (rarely used) to dark blue (frequently used).", "All correlations are significant at p < .05 and controlled for age and gender.", "', 'racism', 'feminism', 'transgender') .", "This perhaps reflects the desire for conservatives on Twitter to identify like-minded individuals, as extreme conservatives are a minority on the platform.", "Liberals, by contrast, use the platform to discuss and popularize their causes.", "Moderate Conservatives vs.", "Moderate Liberals Comparing the two sides of moderate users reveals a slightly more nuanced view of the two ideologies.", "While moderate conservatives still make heavy use of religious terms and express positive emotions (Emot-Joy, L-DRIVES), they also use affiliative language (L-AFFILIATION) and plural pronouns (L-WE).", "Moderate liberals are identified by very different features compared to their more extreme counterparts.", "Most striking is the use of swear and sex words (L-SEXUAL, L-ANGER, W2V-316), also highlighted by Sylwester and Purver (2015) .", "Two word clusters relating to British culture (W2V-458) and art (W2V-373) reflect that liberals are more inclined towards arts (Dollinger, 2007) .", "Statistically significant political terms are very few compared to the previous comparison, probably due to their lower overall usage, which we further investigate later.", "Moderates vs. 
Extremists Our final comparison looks at outright moderates compared to the two extreme groups combined, as we hypothesise the existence of a difference in overall political engagement.", "Moderates are not characterized by many features besides a topic of casual words (W2V-098), indicating the heterogeneity of this group of users.", "However, regardless of their orientation, the ideological extremists stand out from moderates.", "They use words and word clusters related to political actors (W2V-309), issues (W2V-237) and laws (W2V-296, W2V-288).", "LIWC analysis uncovers differences in article use (L-ARTICLE) or power words (L-POWER) specific of political tweets.", "The overall sentiment of these users is negative (Emot-Fear, Emot-Disgust, Emot-Sadness, L-DEATH) compared to moderates.", "This reveals -combined with the finding from the first comparison -that while extreme conservatives are overall more positive than liberals, both groups share negative expression.", "Political terms are almost all significantly correlated with the extreme ideological groups, Con.", "(1) Con.", "(2) M.Con.", "(3) Mod.", "(4) confirming the existence of a difference in political engagement which we study in detail next.", "Figure 3 presents the use of the three types of political terms across the 7 ideological groups in D 1 and the two political groups from D 2 .", "We notice the following: Political Terms • D 2 has a huge skew towards political words, with an average of more than three times more political terms across all three categories than our extreme classes from D 1 ; • Within the groups in D 1 , we observe an almost perfectly symmetrical U-shape across all three types of political terms, confirming our hypothesis about political engagement; • The difference between 1-2/6-7 is larger than 2-3/5-6.", "The extreme liberals and conservatives are disproportionately political, and have the potential to give Twitter's political discussions an unrepresentative, extremist hue (Fiorina, 1999) .", "It is also possible, however, that characterizing one as an extreme liberal or conservative indicates as much about her level of political engagement as it does about her placement on a left-right scale (Converse, 1964; Broockman, 2016) .", "Prediction In this section we build predictive models of political ideology and compare them to data sets obtained using previous work.", "Cross-Group Prediction First, we experiment with classifying between conservatives and liberals across various levels of political engagement in D 1 and between the two polarized groups in D 2 .", "We use logistic regression classification to compare three setups in Table 2 with results measured with ROC AUC as the classes are slightly inbalanced: • 10-fold cross-validation where training is performed on the same task as the testing (principal diagonal); • A train-test setup where training is performed on one task (presented in rows) and testing is performed on another (presented in columns); • A domain adaptation setup (results in brackets) where on each of the 10 folds, the 9 training folds (presented in rows) are supplemented with all the data from a different task (presented in columns) using the EasyAdapt algorithm (Daumé III, 2007) as a proof on concept on the effects of using additional distantly supervised data.", "Data pooling lead to worse results than EasyAdapt.", "Each of the three tasks from D 1 have a similar number of training samples, hence we do not expect that data set size has any effects in comparing the results across 
tasks.", "The results with both sets of features show that: • Prediction performance is much higher for D 2 than for D 1 , with the more extreme groups in D 1 being easier to predict than the moderate groups.", "This confirms that the very high accuracies reported by previous research are an artifact of user label collection and that on regular users, the expected accuracy is much lower (Cohen and Ruths, 2013) .", "We further show that, as the level of political engagement decreases, the classification problem becomes even harder; • The model trained on D 2 and Word2Vec word clusters performs significantly worse on D 1 tasks even if the training data is over 10 times larger.", "When using political words, the D 2 trained classifier performs relatively well on all tasks from D 1 ; • Overall, using political words as features performs better than Word2Vec clusters in the binary classification tasks; • Domain adaptation helps in the majority of cases, leading to improvements of up to .03 in AUC (predicting 2v6 supplemented with 3v5 data).", "Political Leaning and Engagement Prediction Political leaning (Conservative -Liberal, excluding the Moderate group) can be considered an ordinal variable and the prediction problem framed as one of regression.", "In addition to the political leaning prediction, based on analysis and previous prediction results, we hypothesize the existence of a separate dimension of political engagement regardless of the partisan side.", "Thus, we merge users from classes 3-5, 2-6, 1-7 and create a variable with four values, where the lowest value is represented by moderate users (4) and the highest value is represented by either very conservative (1) or very liberal (7) users.", "We use a linear regression algorithm with an Elastic Net regularizer (Zou and Hastie, 2005) as implemented in ScikitLearn (Pedregosa et al., 2011) .", "To evaluate our results, we split our data into 10 stratified folds and performed crossvalidation on one held-out fold at a time.", "For all our methods we tune the parameters of our models on a separate validation fold.", "The overall performance is assessed using Pearson correlation between the set of predicted values and the userreported score.", "Results are presented in Table 3 .", "735 The same patterns hold when evaluating the results with Root Mean Squared Error (RMSE).", "Table 3 : Pearson correlations between the predictions and self-reported ideologies using linear regression with each feature category and a linear combination of their predictions in a 10-fold cross-validation setup.", "Political leaning is represented on the 1-7 scale removing the moderates (4).", "Political engagement is a scale ranging from 4 through 3-5 and 2-6 to 1-7.", "The results show that both dimensions can be predicted well above chance, with political leaning being easier to predict than engagement.", "Word2Vec clusters obtain the highest predictive accuracy for political leaning, even though they did not perform as well in the previous classification tasks.", "For political engagement, political terms and Word2Vec clusters obtain similar predictive accuracy.", "This result is expected based on the results from Figure 3 , which showed how political term usage varies across groups, and how it is especially dependent on political engagement.", "While political terms are very effective at distinguishing between two opposing political groups, they can not discriminate as well between levels of engagement within the same ideological orientation.", "Combining all 
classifiers' predictions in a linear ensemble obtains best results when compared to each individual category.", "Encoding Class Structure In our previous experiments, we uncovered that certain relationships exist between the seven groups.", "For example, extreme conservatives and liberals both demonstrate strong political engagement.", "Therefore, this class structure can be exploited to improve classification performance.", "To this end, we deploy the sparse graph regularized approach (Argyriou et al., 2007; Zhou et al., 2011) to encode the structure of the seven classes as a graph regularizer in a logistic regression framework.", "In particular, we employed a multi-task learning paradigm, where each task is a one-vs-all classification.", "Multi-task learning (MTL) is a learning paradigm that jointly learns multiple related tasks and can achieve better generalization performance than learning each task individually, especially when presented with insufficient training samples (Liu et al., 2015, 2016b).", "The group structure is encoded into a matrix R which codes the groups which are considered similar.", "The objective of the sparse graph regularized multi-task learning problem is: $\min_{W,c} \sum_{t=1}^{\tau} \sum_{i=1}^{N} \log(1 + \exp(-Y_{t,i}(W_{i,t}^{T} X_{t,i} + c_t))) + \gamma \|WR\|_{F}^{2} + \lambda \|W\|_{1}$, where $\tau$ is the number of tasks, $N$ the number of samples, $X$ the feature matrix, $Y$ the outcome matrix, $W_{i,t}$ and $c_t$ are the model for task $t$, and $R$ is the structure matrix.", "We define three R matrices: (1) codes that groups with similar political engagement are similar (i.e.", "1-7, 2-6, 3-5); (2) codes that groups from each ideological side are similar (i.e.", "1-2, 1-3, 2-3, 5-6, 5-7, 6-7); (3) learnt from the data.", "Results are presented in Table 4 .", "Regular logistic regression performs slightly better than the majority class baseline, which demonstrates that the 7-class classification is a very hard problem although most misclassifications are within one ideology point.", "The graph regularization (GR) improves the classification performance over logistic regression (LR) in all cases, with the political-leaning-based matrix (GR-Leaning) obtaining 2% higher accuracy than the political-engagement-based one (GR-Engagement) and the learnt matrix (GR-Learnt) obtaining best results.", "Conclusions This study analyzed user-level political ideology through Twitter posts.", "In contrast to previous work, we made use of a novel data set where fine-grained user political ideology labels are obtained through surveys as opposed to binary self-reports.", "We showed that users in our data set are far less likely to post about politics and real-world fine-grained political ideology prediction is harder and more nuanced than previously reported.", "We analyzed language differences between the ideological groups and uncovered a dimension of political engagement separate from political leaning.", "Our work has implications for pollsters or marketers, who are most interested in identifying and persuading moderate users.", "With respect to political conclusions, researchers commonly conceptualize ideology as a single, left-right dimension similar to what we observe in the U.S.
Congress (Ansolabehere et al., 2008; Bafumi and Herron, 2010) .", "Our results suggest a different direction: self-reported political extremity is more an indication of political engagement than of ideological self-placement (Abramowitz, 2010) .", "In fact, only self-reported extremists appear to devote much of their Twitter activity to politics at all.", "While our study focused solely on text posted by the user, follow-up work can use other modalities such as images or social network analysis to improve prediction performance.", "In addition, our work on user-level modeling can be integrated with work on message-level political bias to study how this is revealed across users with various levels of engagement.", "Another direction of future study will look at political ideology prediction in other countries and cultures, where ideology has different or multiple dimensions." ] }
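The cross-group experiments in this record supplement the target task's training folds with data from a related task using EasyAdapt (Daumé III, 2007) and score with ROC AUC. A hedged sketch of that setup follows; the feature matrices, label vectors and the logistic regression hyperparameters are placeholders rather than the authors' configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def easyadapt_augment(X, domains, n_domains=2):
    """Feature augmentation of Daume III (2007): one shared copy of the features
    plus one domain-specific copy per domain."""
    n, d = X.shape
    out = np.zeros((n, d * (1 + n_domains)))
    out[:, :d] = X
    for i, k in enumerate(domains):
        out[i, d * (1 + k):d * (2 + k)] = X[i]
    return out

def auc_with_adaptation(X_tgt_tr, y_tgt_tr, X_src, y_src, X_tgt_te, y_tgt_te):
    """Train on the target task's folds supplemented with a related task's data
    (e.g. 2v6 supplemented with 3v5) and report ROC AUC on held-out target users."""
    X_tr = np.vstack([X_tgt_tr, X_src])
    y_tr = np.concatenate([y_tgt_tr, y_src])
    dom = np.concatenate([np.zeros(len(y_tgt_tr), dtype=int),
                          np.ones(len(y_src), dtype=int)])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(easyadapt_augment(X_tr, dom), y_tr)
    scores = clf.predict_proba(
        easyadapt_augment(X_tgt_te, np.zeros(len(y_tgt_te), dtype=int)))[:, 1]
    return roc_auc_score(y_tgt_te, scores)
```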
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Data Set", "Features", "Analysis", "Very Conservatives vs. Very Liberals", "Moderate Conservatives vs. Moderate Liberals", "Moderates vs. Extremists", "Political Terms", "Prediction", "Cross-Group Prediction", "Political Leaning and Engagement Prediction", "Encoding Class Structure", "Conclusions" ] }
GEM-SciDuet-train-94#paper-1239#slide-4
Hypotheses
H1 Previous studies used users far more likely to be politically engaged H2 The prediction problem was so far over-simplified H3 Neutral users can be identified H4 Differences in language use exist between moderate and extreme users
H1 Previous studies used users far more likely to be politically engaged H2 The prediction problem was so far over-simplified H3 Neutral users can be identified H4 Differences in language use exist between moderate and extreme users
[]
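The emotion and sentiment features in this record assign one label per message from a word-emotion association lexicon and average the assignments per user. The majority-vote rule and the lexicon format (word mapped to a set of labels, e.g. loaded from the NRC lexicon) are assumptions of this sketch:

```python
from collections import Counter

EMOTIONS = ['anger', 'disgust', 'fear', 'joy', 'sadness', 'surprise',
            'positive', 'negative']

def message_emotion(tokens, lexicon):
    """Assign a single emotion to a message by majority vote over lexicon hits."""
    votes = Counter(e for t in tokens for e in lexicon.get(t, ()))
    return votes.most_common(1)[0][0] if votes else None

def user_emotion_scores(messages, lexicon):
    """Fraction of a user's labeled messages assigned to each emotion/sentiment."""
    labels = [message_emotion(m, lexicon) for m in messages]
    labeled = [l for l in labels if l is not None]
    n = len(labeled) or 1
    counts = Counter(labeled)
    return {e: counts.get(e, 0) / n for e in EMOTIONS}
```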
GEM-SciDuet-train-94#paper-1239#slide-5
1239
Beyond Binary Labels: Political Ideology Prediction of Twitter Users
Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US. This study examines users' political ideology using a seven-point scale which enables us to identify politically moderate and neutral users - groups which are of particular interest to political scientists and pollsters. Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users. Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords. Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175 ], "paper_content_text": [ "Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US.", "This study examines users' political ideology using a sevenpoint scale which enables us to identify politically moderate and neutral usersgroups which are of particular interest to political scientists and pollsters.", "Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users.", "Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords.", "Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.", "Introduction Social media is used by people to share their opinions and views.", "Unsurprisingly, an important part of the population shares opinions and news related to politics or causes they support, thus offering strong cues about their political preferences and ideologies.", "In addition, political membership is also predictable purely from one's interests or demographics -it is much more likely for a religious person to be conservative or for a younger person to lean liberal (Ellis and Stimson, 2012) .", "* Work carried out during a research visit at the University of Pennsylvania User trait prediction from text is based on the assumption that language use reflects a user's demographics, psychological states or preferences.", "Applications include prediction of age (Rao et al., 2010; Flekova et al., 2016b) , gender (Burger et al., 2011; Sap et al., 2014) , personality (Schwartz et al., 2013; , socioeconomic status (Preoţiuc-Pietro et al., 2015a,b; Liu et al., 2016c) , popularity (Lampos et al., 2014) or location (Cheng et al., 2010) .", "Research on predicting political orientation has focused on methodological improvements (Pennacchiotti and Popescu, 2011) and used data sets with publicly stated dichotomous political orientation labels due to their easy accessibility (Sylwester and Purver, 2015) .", "However, these data sets are not representative samples of the entire population (Cohen and Ruths, 2013) and do not accurately reflect the variety of political attitudes and engagement (Kam et al., 2007) .", "For example, we expect users who state their political affiliation in their profile description, tweet with partisan hashtags or appear in public party lists to use social media as a means of popularizing and supporting their political 
beliefs (Bar-berASa, 2015) .", "Many users may choose not to publicly post about their political preference for various social goals or perhaps this preference may not be strong or representative enough to be disclosed online.", "Dichotomous political preference also ignores users who do not have a political ideology.", "All of these types of users are very important for researchers aiming to understand group preferences, traits or moral values (Lewis and Reiley, 2014; Hersh, 2015) .", "The most common political ideology spectrum in the US is the conservative -liberal (Ellis and Stimson, 2012) .", "We collect a novel data set of Twitter users mapped to this seven-point spectrum which allows us to: 1.", "Uncover the differences in language use between ideological groups; 2.", "Develop a user-level political ideology prediction algorithm that classifies all levels of engagement and leverages the structure in the political ideology spectrum.", "First, using a broad range of language features including unigrams, word clusters and emotions, we study the linguistic differences between the two ideologically extreme groups, the two ideologically moderate groups and between both extremes and moderates in order to provide insight into the content they post on Twitter.", "In addition, we examine the extent to which the ideological groups in our data set post about politics and compare it to a data set obtained similarly to previous work.", "In prediction experiments, we show how accurately we can distinguish between opposing ideological groups in various scenarios and that previous binary political orientation prediction has been oversimplified.", "Then, we measure the extent to which we can predict the two dimensions of political leaning and engagement.", "Finally, we build an ideology classifier in a multi-task learning setup that leverages the relationships between groups.", "1 Related Work Automatically inferring user traits from their online footprints is a prolific topic of research, enabled by the increasing availability of user generated data and advances in machine learning.", "Beyond its research oriented goals, user profiling has important industry applications in online marketing, personalization or large-scale audience profiling.", "To this end, researchers have used a wide range of types of online footprints, including video (Subramanian et al., 2013) , audio (Alam and Riccardi, 2014 ), text (Preoţiuc-Pietro et al., 2015a) , profile images (Liu et al., 2016a) , social data (Van Der Heide et al., 2012; Hall et al., 2014) , social networks (Perozzi and Skiena, 2015; Rout et al., 2013) , payment data (Wang et al., 2016) and endorsements .", "Political orientation prediction has been studied in two related, albeit crucially different scenarios, as also identified in (Zafar et al., 2016) .", "First, researchers aimed to identify and quantify orientation of words (Monroe et al., 2008) , hashtags (Weber et al., 2013) or documents (Iyyer et al., 2014) , or to detect bias (Yano et al., 2010) or impartiality (Zafar et al., 2016) at a document level.", "Our study belongs to the second category, where political orientation is inferred at a user-level.", "All previous studies study labeling US conservatives vs. 
liberals using either text (Rao et al., 2010) , social network connections (Zamal et al., 2012) , platform-specific features (Conover et al., 2011) or a combination of these (Pennacchiotti and Popescu, 2011; Volkova et al., 2014) , with very high reported accuracies of up to 94.9% (Conover et al., 2011) .", "However, all previous work on predicting userlevel political preferences are limited to a binary prediction between liberal/democrat and conservative/republican, disregarding any nuances in political ideology.", "In addition, as the focus of the studies is more on the methodological or interpretation aspects of the problem, another downside is that the user labels were obtained in simple, albeit biased ways.", "These include users who explicitly state their political orientation on user lists of party supporters (Zamal et al., 2012; Pennacchiotti and Popescu, 2011) , supporting partisan causes (Rao et al., 2010) , by following political figures (Volkova et al., 2014) or party accounts (Sylwester and Purver, 2015) or that retweet partisan hashtags (Conover et al., 2011) .", "As also identified in (Cohen and Ruths, 2013) and further confirmed later in this study, these data sets are biased: most people do not clearly state their political preference online -fewer than 5% according to Priante et al.", "(2016) -and those that state their preference are very likely to be political activists.", "Cohen and Ruths (2013) demonstrated that predictive accuracy of classifiers is significantly lower when confronted with users that do not explicitly mention their political orientation.", "Despite this, their study is limited because in their hardest classification task, they use crowdsourced political orientation labels, which may not correspond to reality and suffer from biases (Flekova et al., 2016a; .", "Further, they still only look at predicting binary political orientation.", "To date, no other research on this topic has taken into account these findings.", "Data Set The main data set used in this study consists of 3,938 users recruited through the Qualtrics platform (D 1 ).", "Each participant was compensated with 3 USD for 15 minutes of their time.", "All participants first answered the same demographic questions (including political ideology), then were directed to one of four sets of psychological questionnaires unrelated to the political ideology question.", "They were asked to self-report their political ideology on a seven point scale: Very conservative (1), Conservative (2), Moderately conservative (3), Moderate (4), Moderately liberal (5), Liberal (6), Very liberal (7).", "In addition, participants had the option of choosing Apathetic and Other, which have ambiguous fits on the conservative -liberal spectrum and were removed from our analysis (399 users).", "We also asked participants to self-report their gender (2322 female, 1205 male, 12 other) and age.", "Participants were all from the US in order to limit the impact of cultural and political factors.", "The political ideology distribution in our sample is presented in Figure 1 .", "We asked users their Twitter handle and downloaded their most recent 3,200 tweets, leading to a total of 4,833,133 tweets.", "Before adding users to our 3,938 user data set, we performed the following checks to ensure that the Twitter handle was the user's own: 1) after compensation, users were if they were truthful in reporting their handle and if not, we removed their data from analysis; 2) we manually examined all handles marked as verified by 
Twitter or that had over 2000 followers and eliminated them if they were celebrities or corporate/news accounts, as these were unlikely the users who participated in the survey.", "This study received approval from the Institutional Review Board (IRB) of the University of Pennsylvania.", "In addition, to facilitate comparison to previous work, we also use a data set of 13,651 users with overt political orientation (D 2 ).", "We selected popular political figures unambiguously associated with US liberal politics (@SenSanders, @JoeBiden, @CoryBooker, @JohnKerry) or US conservative politics (@marcorubio, @tedcruz, @RandPaul, @RealBenCarson).", "Liberals in our set (N l = 7417) had to follow on Twitter all of the liberal political figures and none of the conservative figures.", "Likewise, conservative users (N c = 6234) had to follow all of the conservative figures and no liberal figures.", "We downloaded up to 3,200 of each user's most recent tweets, leading to a total of 25,493,407 tweets.", "All tweets were downloaded around 10 August 2016.", "Features In our analysis, we use a broad range of linguistic features described below.", "Unigrams We use the bag-of-words representation to reduce each user's posting history to a normalised frequency distribution over the vocabulary consisting of all words used by at least 10% of the users (6,060 words).", "LIWC Traditional psychological studies use a dictionary-based approach to representing text.", "The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) , and automatically counts word frequencies for 64 different categories manually constructed based on psychological theory.", "These include different parts-of-speech, topical categories and emotions.", "Each user is thereby represented as a frequency distribution over these categories.", "Word2Vec Topics An alternative to LIWC is to use automatically generated word clusters i.e., groups of words that are semantically and/or syntactically similar.", "The clusters help reducing the feature space and provides additional interpretability.", "To create these groups of words, we use an automatic method that leverages word co-occurrence patterns in large corpora by making use of the distributional hypothesis: similar words tend to cooccur in similar contexts (Harris, 1954) .", "Based on co-occurrence statistics, each word is represented as a low dimensional vector of numbers with words closer in this space being more similar (Deerwester et al., 1990) .", "We use the method from (Preoţiuc-Pietro et al., 2015a) to compute topics using word2vec similarity (Mikolov et al., 2013a,b) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007) of different sizes (from 30 to 2000).", "We have tried other alternatives to building clusters: using other word similarities to generate clusters -such as NPMI (Lampos et al., 2014) or GloVe as proposed in (Preoţiuc-Pietro et al., 2015a) -or using standard topic modelling approached to create soft clusters of words e.g., Latent Dirichlet Allocation (Blei et al., 2003) .", "For brevity, we present experiments with the best performing feature set containing 500 Word2Vec clusters.", "We aggregate all the words posted in a users' tweets and represent each user as a distribution of the fraction of words belonging to each cluster.", "Sentiment & Emotions We hypothesise that different political ideologies differ in the type and amount of emotions the users express through their posts.", "The most studied model of discrete 
emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004) which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise.", "We automatically quantify these emotions from our Twitter data set using a publicly available crowd-sourcing derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment Turney, 2010, 2013) .", "Using these lexicons, we assign a predicted emotion to each message and then average across all users' posts to obtain user level emotion expression scores.", "Political Terms In order to select unigrams pertaining to politics, we assigned the most frequent 12,000 unigrams in our data set to three categories: • Political words: mentions of political terms (234); • Political NEs: mentions of politician proper names out of the political terms (39); • Media NEs: mentions of political media sources and pundits out of the political terms (20).", "This coding was initially performed by a research assistant studying political science with good knowledge of US politics and were further filtered and checked by one of the authors.", "Analysis First, we explore the relationships between language use and political ideological groups within each feature set and pairs of opposing user groups.", "To illustrate differences between ideological groups we compare the two political extremes (Very Conservative -Very Liberal) and the political moderates (Moderate Conservative -Moderate Liberal).", "We further compare outright moderates with a group combining the two political extremes to study if we can uncover differences in political engagement and extremity, regardless of the conservative-liberal leaning.", "We use univariate partial linear correlations with age and gender as co-variates to factor out the influence of basic demographics.", "For example, in D 1 , users who reported themselves as very conservative are older and more likely males (µ age = 35.1, pct male = 44%) than the data average (µ age = 31.2, pct male = 35%).", "Additionally, prior to combining the two ideologically extreme groups, we sub-sampled the larger class (Very Liberal) to match the smaller class (Very Conservative) in age and gender.", "In the later prediction experiments, we do not perform matching, as this represents useful signal for classification (Ellis and Stimson, 2012) .", "Results with unigrams are presented in Figure 2 and with the other features in Table 1 .", "These are selected using standard statistical significance tests.", "Very Conservatives vs.", "Very Liberals The comparison between the extreme categories reveals the largest number of significant differences.", "The unigrams and Word2Vec clusters specific to conservatives are dominated by religion specific terms ('praying', 'god', W2V-485, W2V-018, W2V-099, L-RELIG), confirming a well-documented relationship (Gelman, 2009) and words describing family relationships ('uncle', 'son', L-FAMILY), another conservative value (Lakoff, 1997) .", "The emphasis on religious terms among conservatives is consistent with the claim that many Americans associate 'conservative' with 'religious' (Ellis and Stimson, 2012) .", "Extreme liberals show a tendency to use more adjectives (W2V-075, W2V-110), adverbs (L-ADVERB), conjunctions (L-CONJ) and comparisons (L-COMPARE) which indicate more nuanced and complex posts.", "Extreme conservatives post tweets higher in all positive emotions than liberals (L-POSEMO, Emot-Joy, Emot-Positive), 
confirming a previously hypothesised relationship (Napier and Jost, 2008) .", "However, extreme liberals are not associated with posting negative emotions either, only using words that reflect more anxiety (L-ANX), which is related to neuroticism in which the liberals are higher (Gerber et al., 2010) .", "Political term analysis reveals the partisan terms Figure 2 : Unigrams with the highest 80 Pearson correlations shown as word clouds in three vertical panels with a binary variable representing the two ideological groups compared.", "The size of the unigram is scaled by its correlation with the ideological group in bold.", "The color indexes relative frequency, from light blue (rarely used) to dark blue (frequently used).", "All correlations are significant at p < .05 and controlled for age and gender.", "', 'racism', 'feminism', 'transgender') .", "This perhaps reflects the desire for conservatives on Twitter to identify like-minded individuals, as extreme conservatives are a minority on the platform.", "Liberals, by contrast, use the platform to discuss and popularize their causes.", "Moderate Conservatives vs.", "Moderate Liberals Comparing the two sides of moderate users reveals a slightly more nuanced view of the two ideologies.", "While moderate conservatives still make heavy use of religious terms and express positive emotions (Emot-Joy, L-DRIVES), they also use affiliative language (L-AFFILIATION) and plural pronouns (L-WE).", "Moderate liberals are identified by very different features compared to their more extreme counterparts.", "Most striking is the use of swear and sex words (L-SEXUAL, L-ANGER, W2V-316), also highlighted by Sylwester and Purver (2015) .", "Two word clusters relating to British culture (W2V-458) and art (W2V-373) reflect that liberals are more inclined towards arts (Dollinger, 2007) .", "Statistically significant political terms are very few compared to the previous comparison, probably due to their lower overall usage, which we further investigate later.", "Moderates vs. 
Extremists Our final comparison looks at outright moderates compared to the two extreme groups combined, as we hypothesise the existence of a difference in overall political engagement.", "Moderates are not characterized by many features besides a topic of casual words (W2V-098), indicating the heterogeneity of this group of users.", "However, regardless of their orientation, the ideological extremists stand out from moderates.", "They use words and word clusters related to political actors (W2V-309), issues (W2V-237) and laws (W2V-296, W2V-288).", "LIWC analysis uncovers differences in article use (L-ARTICLE) or power words (L-POWER) specific of political tweets.", "The overall sentiment of these users is negative (Emot-Fear, Emot-Disgust, Emot-Sadness, L-DEATH) compared to moderates.", "This reveals -combined with the finding from the first comparison -that while extreme conservatives are overall more positive than liberals, both groups share negative expression.", "Political terms are almost all significantly correlated with the extreme ideological groups, Con.", "(1) Con.", "(2) M.Con.", "(3) Mod.", "(4) confirming the existence of a difference in political engagement which we study in detail next.", "Figure 3 presents the use of the three types of political terms across the 7 ideological groups in D 1 and the two political groups from D 2 .", "We notice the following: Political Terms • D 2 has a huge skew towards political words, with an average of more than three times more political terms across all three categories than our extreme classes from D 1 ; • Within the groups in D 1 , we observe an almost perfectly symmetrical U-shape across all three types of political terms, confirming our hypothesis about political engagement; • The difference between 1-2/6-7 is larger than 2-3/5-6.", "The extreme liberals and conservatives are disproportionately political, and have the potential to give Twitter's political discussions an unrepresentative, extremist hue (Fiorina, 1999) .", "It is also possible, however, that characterizing one as an extreme liberal or conservative indicates as much about her level of political engagement as it does about her placement on a left-right scale (Converse, 1964; Broockman, 2016) .", "Prediction In this section we build predictive models of political ideology and compare them to data sets obtained using previous work.", "Cross-Group Prediction First, we experiment with classifying between conservatives and liberals across various levels of political engagement in D 1 and between the two polarized groups in D 2 .", "We use logistic regression classification to compare three setups in Table 2 with results measured with ROC AUC as the classes are slightly inbalanced: • 10-fold cross-validation where training is performed on the same task as the testing (principal diagonal); • A train-test setup where training is performed on one task (presented in rows) and testing is performed on another (presented in columns); • A domain adaptation setup (results in brackets) where on each of the 10 folds, the 9 training folds (presented in rows) are supplemented with all the data from a different task (presented in columns) using the EasyAdapt algorithm (Daumé III, 2007) as a proof on concept on the effects of using additional distantly supervised data.", "Data pooling lead to worse results than EasyAdapt.", "Each of the three tasks from D 1 have a similar number of training samples, hence we do not expect that data set size has any effects in comparing the results across 
tasks.", "The results with both sets of features show that: • Prediction performance is much higher for D 2 than for D 1 , with the more extreme groups in D 1 being easier to predict than the moderate groups.", "This confirms that the very high accuracies reported by previous research are an artifact of user label collection and that on regular users, the expected accuracy is much lower (Cohen and Ruths, 2013) .", "We further show that, as the level of political engagement decreases, the classification problem becomes even harder; • The model trained on D 2 and Word2Vec word clusters performs significantly worse on D 1 tasks even if the training data is over 10 times larger.", "When using political words, the D 2 trained classifier performs relatively well on all tasks from D 1 ; • Overall, using political words as features performs better than Word2Vec clusters in the binary classification tasks; • Domain adaptation helps in the majority of cases, leading to improvements of up to .03 in AUC (predicting 2v6 supplemented with 3v5 data).", "Political Leaning and Engagement Prediction Political leaning (Conservative -Liberal, excluding the Moderate group) can be considered an ordinal variable and the prediction problem framed as one of regression.", "In addition to the political leaning prediction, based on analysis and previous prediction results, we hypothesize the existence of a separate dimension of political engagement regardless of the partisan side.", "Thus, we merge users from classes 3-5, 2-6, 1-7 and create a variable with four values, where the lowest value is represented by moderate users (4) and the highest value is represented by either very conservative (1) or very liberal (7) users.", "We use a linear regression algorithm with an Elastic Net regularizer (Zou and Hastie, 2005) as implemented in ScikitLearn (Pedregosa et al., 2011) .", "To evaluate our results, we split our data into 10 stratified folds and performed crossvalidation on one held-out fold at a time.", "For all our methods we tune the parameters of our models on a separate validation fold.", "The overall performance is assessed using Pearson correlation between the set of predicted values and the userreported score.", "Results are presented in Table 3 .", "735 The same patterns hold when evaluating the results with Root Mean Squared Error (RMSE).", "Table 3 : Pearson correlations between the predictions and self-reported ideologies using linear regression with each feature category and a linear combination of their predictions in a 10-fold cross-validation setup.", "Political leaning is represented on the 1-7 scale removing the moderates (4).", "Political engagement is a scale ranging from 4 through 3-5 and 2-6 to 1-7.", "The results show that both dimensions can be predicted well above chance, with political leaning being easier to predict than engagement.", "Word2Vec clusters obtain the highest predictive accuracy for political leaning, even though they did not perform as well in the previous classification tasks.", "For political engagement, political terms and Word2Vec clusters obtain similar predictive accuracy.", "This result is expected based on the results from Figure 3 , which showed how political term usage varies across groups, and how it is especially dependent on political engagement.", "While political terms are very effective at distinguishing between two opposing political groups, they can not discriminate as well between levels of engagement within the same ideological orientation.", "Combining all 
classifiers' predictions in a linear ensemble obtains best results when compared to each individual category.", "Encoding Class Structure In our previous experiments, we uncovered that certain relationships exist between the seven groups.", "For example, extreme conservatives and liberals both demonstrate strong political engagement.", "Therefore, this class structure can be exploited to improve classification performance.", "To this end, we deploy the sparse graph regularized approach (Argyriou et al., 2007; Zhou et al., 2011) to encode the structure of the seven classes as a graph regularizer in a logistic regression framework.", "In particular, we employed a multi-task learning paradigm, where each task is a one-vs-all classification.", "Multi-task learning (MTL) is a learning paradigm that jointly learns multiple related tasks and can achieve better generalization performance than learning each task individually, especially when presented with insufficient training samples (Liu et al., 2015, 2016b).", "The group structure is encoded into a matrix R which codes the groups which are considered similar.", "The objective of the sparse graph regularized multi-task learning problem is: $\min_{W,c} \sum_{t=1}^{\tau} \sum_{i=1}^{N} \log\big(1 + \exp(-Y_{t,i}(W_{t}^{T} X_{t,i} + c_{t}))\big) + \gamma \|WR\|_{F}^{2} + \lambda \|W\|_{1}$, where $\tau$ is the number of tasks, $N$ the number of samples, $X$ the feature matrix, $Y$ the outcome matrix, $W_{t}$ and $c_{t}$ are the model for task $t$, and $R$ is the structure matrix.", "We define three R matrices: (1) codes that groups with similar political engagement are similar (i.e.", "1-7, 2-6, 3-5); (2) codes that groups from each ideological side are similar (i.e.", "1-2, 1-3, 2-3, 5-6, 5-7, 6-7); (3) learnt from the data.", "Results are presented in Table 4.", "Regular logistic regression performs slightly better than the majority class baseline, which demonstrates that the 7-class classification is a very hard problem although most misclassifications are within one ideology point.", "The graph regularization (GR) improves the classification performance over logistic regression (LR) in all cases, with the political leaning based matrix (GR-Leaning) obtaining 2% higher accuracy than the political engagement one (GR-Engagement) and the learnt matrix (GR-Learnt) obtaining best results.", "Conclusions This study analyzed user-level political ideology through Twitter posts.", "In contrast to previous work, we made use of a novel data set where fine-grained user political ideology labels are obtained through surveys as opposed to binary self-reports.", "We showed that users in our data set are far less likely to post about politics and real-world fine-grained political ideology prediction is harder and more nuanced than previously reported.", "We analyzed language differences between the ideological groups and uncovered a dimension of political engagement separate from political leaning.", "Our work has implications for pollsters or marketers, who are most interested to identify and persuade moderate users.", "With respect to political conclusions, researchers commonly conceptualize ideology as a single, left-right dimension similar to what we observe in the U.S. 
Congress (Ansolabehere et al., 2008; Bafumi and Herron, 2010) .", "Our results suggest a different direction: self-reported political extremity is more an indication of political engagement than of ideological self-placement (Abramowitz, 2010) .", "In fact, only self-reported extremists appear to devote much of their Twitter activity to politics at all.", "While our study focused solely on text posted by the user, follow-up work can use other modalities such as images or social network analysis to improve prediction performance.", "In addition, our work on user-level modeling can be integrated with work on message-level political bias to study how this is revealed across users with various levels of engagement.", "Another direction of future study will look at political ideology prediction in other countries and cultures, where ideology has different or multiple dimensions." ] }
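The cross-group prediction setup described in the paper content above (logistic regression, ROC AUC, EasyAdapt-style domain adaptation) can be sketched as follows. This is not the authors' code: EasyAdapt is rendered here as the standard feature-augmentation trick, and the matrices, labels, feature dimensionality, and split sizes are random placeholders rather than the actual D1/D2 data.

```python
# Minimal sketch of the cross-group setup: EasyAdapt feature augmentation
# (Daume III, 2007) + logistic regression, evaluated with ROC AUC.
# All data below is synthetic placeholder data.
import numpy as np
from scipy.sparse import csr_matrix, hstack, vstack
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X_target = csr_matrix(rng.rand(300, 500))   # stand-in for survey-labelled users (D1)
y_target = rng.randint(0, 2, 300)
X_source = csr_matrix(rng.rand(1000, 500))  # stand-in for distantly labelled users (D2)
y_source = rng.randint(0, 2, 1000)

def easyadapt(X, domain):
    """Augment features into [shared, source-only, target-only] blocks."""
    zeros = csr_matrix(X.shape)
    if domain == "source":
        return hstack([X, X, zeros]).tocsr()
    return hstack([X, zeros, X]).tocsr()

X_tr, X_te, y_tr, y_te = train_test_split(X_target, y_target,
                                          test_size=0.1, random_state=0)
X_train = vstack([easyadapt(X_tr, "target"), easyadapt(X_source, "source")])
y_train = np.concatenate([y_tr, y_source])

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(easyadapt(X_te, "target"))[:, 1]
print("ROC AUC:", round(roc_auc_score(y_te, scores), 3))
```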
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Data Set", "Features", "Analysis", "Very Conservatives vs. Very Liberals", "Moderate Conservatives vs. Moderate Liberals", "Moderates vs. Extremists", "Political Terms", "Prediction", "Cross-Group Prediction", "Political Leaning and Engagement Prediction", "Encoding Class Structure", "Conclusions" ] }
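The sparse graph-regularized multi-task objective quoted in the paper content can be made concrete with a small numpy sketch. The proximal-gradient (ISTA-style) optimizer, the assumption that the one-vs-all tasks share a single feature matrix, the example structure matrix R, and all hyperparameters are illustrative choices, not details taken from the paper.

```python
# Sketch of the graph-regularized multi-task logistic regression objective:
#   sum_t sum_i log(1 + exp(-Y[i,t] (X[i] @ W[:,t] + c[t])))
#       + gamma * ||W R||_F^2 + lam * ||W||_1
# Data, R, step size, and iteration count are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, d, tasks = 200, 50, 7
X = rng.normal(size=(n, d))                   # shared user-feature matrix (assumption)
Y = rng.choice([-1.0, 1.0], size=(n, tasks))  # one-vs-all labels in {-1, +1}

# One possible structure matrix: each column says two groups are similar
# (here the "engagement" pairs 1-7, 2-6, 3-5 on 0-indexed classes).
pairs = [(0, 6), (1, 5), (2, 4)]
R = np.zeros((tasks, len(pairs)))
for k, (i, j) in enumerate(pairs):
    R[i, k], R[j, k] = 1.0, -1.0

gamma, lam, step, iters = 0.1, 0.01, 1e-3, 500
W = np.zeros((d, tasks))
c = np.zeros(tasks)

def smooth_grad(W, c):
    Z = X @ W + c                      # n x tasks margins
    S = -Y / (1.0 + np.exp(Y * Z))     # derivative of logistic loss w.r.t. margin
    gW = X.T @ S + 2.0 * gamma * W @ R @ R.T
    gc = S.sum(axis=0)
    return gW, gc

for _ in range(iters):                 # ISTA: gradient step + soft-thresholding for L1
    gW, gc = smooth_grad(W, c)
    W, c = W - step * gW, c - step * gc
    W = np.sign(W) * np.maximum(np.abs(W) - step * lam, 0.0)

obj = (np.log1p(np.exp(-Y * (X @ W + c))).sum()
       + gamma * np.linalg.norm(W @ R) ** 2 + lam * np.abs(W).sum())
print("objective:", round(obj, 3))
```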
GEM-SciDuet-train-94#paper-1239#slide-5
Engagement
H1 Previous studies used users far more likely to be politically engaged I Political words (234) I Political NEs: mentions of politician proper names (39) I Media NEs: mentions of political media sources and Data set obtained using previous methods Political word usage across user groups Average percentage of political word usage I 3x more political terms for automatically identified users compared to the highest survey-based scores I almost perfectly symmetrical U-shape across all three types of political terms
H1 Previous studies used users far more likely to be politically engaged I Political words (234) I Political NEs: mentions of politician proper names (39) I Media NEs: mentions of political media sources and Data set obtained using previous methods Political word usage across user groups Average percentage of political word usage I 3x more political terms for automatically identified users compared to the highest survey-based scores I almost perfectly symmetrical U-shape across all three types of political terms
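The political-term usage statistic behind this slide (the share of a user's tokens that are political words, politician names, or media/pundit names) could be computed along these lines. The three word lists below are tiny invented stand-ins; the actual 234/39/20-term lists are not reproduced in the paper content.

```python
# Toy sketch of per-user political term rates; the word lists are placeholders.
POLITICAL_WORDS = {"senate", "election", "policy", "liberal", "conservative"}
POLITICIAN_NES = {"obama", "clinton", "trump"}       # hypothetical examples
MEDIA_NES = {"foxnews", "msnbc"}                     # hypothetical examples

def political_term_rates(tokens):
    tokens = [t.lower() for t in tokens]
    n = max(len(tokens), 1)
    return {
        "political_words": sum(t in POLITICAL_WORDS for t in tokens) / n,
        "political_nes": sum(t in POLITICIAN_NES for t in tokens) / n,
        "media_nes": sum(t in MEDIA_NES for t in tokens) / n,
    }

print(political_term_rates("The senate election matters watch msnbc".split()))
```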
[]
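A minimal sketch of the leaning/engagement regression evaluation described in the paper content: Elastic Net regression, 10 stratified folds, and Pearson correlation between predictions and self-reports. Features and labels are synthetic, the alpha and l1_ratio values are arbitrary, and the per-fold hyperparameter tuning mentioned in the paper is omitted.

```python
# Sketch of the regression evaluation: Elastic Net + stratified 10-fold CV,
# scored with Pearson r between predictions and self-reported scores.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 100))                 # e.g., topic usage per user (synthetic)
y = rng.integers(1, 8, size=600).astype(float)  # self-reported 1-7 ideology (synthetic)

preds = np.zeros_like(y)
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
for train_idx, test_idx in skf.split(X, y.astype(int)):
    model = ElasticNet(alpha=0.1, l1_ratio=0.5)  # arbitrary, untuned values
    model.fit(X[train_idx], y[train_idx])
    preds[test_idx] = model.predict(X[test_idx])

r, _ = pearsonr(preds, y)
print(f"Pearson r = {r:.3f}")
```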
GEM-SciDuet-train-94#paper-1239#slide-6
1239
Beyond Binary Labels: Political Ideology Prediction of Twitter Users
Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US. This study examines users' political ideology using a seven-point scale which enables us to identify politically moderate and neutral users - groups which are of particular interest to political scientists and pollsters. Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users. Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords. Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175 ], "paper_content_text": [ "Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US.", "This study examines users' political ideology using a sevenpoint scale which enables us to identify politically moderate and neutral usersgroups which are of particular interest to political scientists and pollsters.", "Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users.", "Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords.", "Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.", "Introduction Social media is used by people to share their opinions and views.", "Unsurprisingly, an important part of the population shares opinions and news related to politics or causes they support, thus offering strong cues about their political preferences and ideologies.", "In addition, political membership is also predictable purely from one's interests or demographics -it is much more likely for a religious person to be conservative or for a younger person to lean liberal (Ellis and Stimson, 2012) .", "* Work carried out during a research visit at the University of Pennsylvania User trait prediction from text is based on the assumption that language use reflects a user's demographics, psychological states or preferences.", "Applications include prediction of age (Rao et al., 2010; Flekova et al., 2016b) , gender (Burger et al., 2011; Sap et al., 2014) , personality (Schwartz et al., 2013; , socioeconomic status (Preoţiuc-Pietro et al., 2015a,b; Liu et al., 2016c) , popularity (Lampos et al., 2014) or location (Cheng et al., 2010) .", "Research on predicting political orientation has focused on methodological improvements (Pennacchiotti and Popescu, 2011) and used data sets with publicly stated dichotomous political orientation labels due to their easy accessibility (Sylwester and Purver, 2015) .", "However, these data sets are not representative samples of the entire population (Cohen and Ruths, 2013) and do not accurately reflect the variety of political attitudes and engagement (Kam et al., 2007) .", "For example, we expect users who state their political affiliation in their profile description, tweet with partisan hashtags or appear in public party lists to use social media as a means of popularizing and supporting their political 
beliefs (Bar-berASa, 2015) .", "Many users may choose not to publicly post about their political preference for various social goals or perhaps this preference may not be strong or representative enough to be disclosed online.", "Dichotomous political preference also ignores users who do not have a political ideology.", "All of these types of users are very important for researchers aiming to understand group preferences, traits or moral values (Lewis and Reiley, 2014; Hersh, 2015) .", "The most common political ideology spectrum in the US is the conservative -liberal (Ellis and Stimson, 2012) .", "We collect a novel data set of Twitter users mapped to this seven-point spectrum which allows us to: 1.", "Uncover the differences in language use between ideological groups; 2.", "Develop a user-level political ideology prediction algorithm that classifies all levels of engagement and leverages the structure in the political ideology spectrum.", "First, using a broad range of language features including unigrams, word clusters and emotions, we study the linguistic differences between the two ideologically extreme groups, the two ideologically moderate groups and between both extremes and moderates in order to provide insight into the content they post on Twitter.", "In addition, we examine the extent to which the ideological groups in our data set post about politics and compare it to a data set obtained similarly to previous work.", "In prediction experiments, we show how accurately we can distinguish between opposing ideological groups in various scenarios and that previous binary political orientation prediction has been oversimplified.", "Then, we measure the extent to which we can predict the two dimensions of political leaning and engagement.", "Finally, we build an ideology classifier in a multi-task learning setup that leverages the relationships between groups.", "1 Related Work Automatically inferring user traits from their online footprints is a prolific topic of research, enabled by the increasing availability of user generated data and advances in machine learning.", "Beyond its research oriented goals, user profiling has important industry applications in online marketing, personalization or large-scale audience profiling.", "To this end, researchers have used a wide range of types of online footprints, including video (Subramanian et al., 2013) , audio (Alam and Riccardi, 2014 ), text (Preoţiuc-Pietro et al., 2015a) , profile images (Liu et al., 2016a) , social data (Van Der Heide et al., 2012; Hall et al., 2014) , social networks (Perozzi and Skiena, 2015; Rout et al., 2013) , payment data (Wang et al., 2016) and endorsements .", "Political orientation prediction has been studied in two related, albeit crucially different scenarios, as also identified in (Zafar et al., 2016) .", "First, researchers aimed to identify and quantify orientation of words (Monroe et al., 2008) , hashtags (Weber et al., 2013) or documents (Iyyer et al., 2014) , or to detect bias (Yano et al., 2010) or impartiality (Zafar et al., 2016) at a document level.", "Our study belongs to the second category, where political orientation is inferred at a user-level.", "All previous studies study labeling US conservatives vs. 
liberals using either text (Rao et al., 2010) , social network connections (Zamal et al., 2012) , platform-specific features (Conover et al., 2011) or a combination of these (Pennacchiotti and Popescu, 2011; Volkova et al., 2014) , with very high reported accuracies of up to 94.9% (Conover et al., 2011) .", "However, all previous work on predicting userlevel political preferences are limited to a binary prediction between liberal/democrat and conservative/republican, disregarding any nuances in political ideology.", "In addition, as the focus of the studies is more on the methodological or interpretation aspects of the problem, another downside is that the user labels were obtained in simple, albeit biased ways.", "These include users who explicitly state their political orientation on user lists of party supporters (Zamal et al., 2012; Pennacchiotti and Popescu, 2011) , supporting partisan causes (Rao et al., 2010) , by following political figures (Volkova et al., 2014) or party accounts (Sylwester and Purver, 2015) or that retweet partisan hashtags (Conover et al., 2011) .", "As also identified in (Cohen and Ruths, 2013) and further confirmed later in this study, these data sets are biased: most people do not clearly state their political preference online -fewer than 5% according to Priante et al.", "(2016) -and those that state their preference are very likely to be political activists.", "Cohen and Ruths (2013) demonstrated that predictive accuracy of classifiers is significantly lower when confronted with users that do not explicitly mention their political orientation.", "Despite this, their study is limited because in their hardest classification task, they use crowdsourced political orientation labels, which may not correspond to reality and suffer from biases (Flekova et al., 2016a; .", "Further, they still only look at predicting binary political orientation.", "To date, no other research on this topic has taken into account these findings.", "Data Set The main data set used in this study consists of 3,938 users recruited through the Qualtrics platform (D 1 ).", "Each participant was compensated with 3 USD for 15 minutes of their time.", "All participants first answered the same demographic questions (including political ideology), then were directed to one of four sets of psychological questionnaires unrelated to the political ideology question.", "They were asked to self-report their political ideology on a seven point scale: Very conservative (1), Conservative (2), Moderately conservative (3), Moderate (4), Moderately liberal (5), Liberal (6), Very liberal (7).", "In addition, participants had the option of choosing Apathetic and Other, which have ambiguous fits on the conservative -liberal spectrum and were removed from our analysis (399 users).", "We also asked participants to self-report their gender (2322 female, 1205 male, 12 other) and age.", "Participants were all from the US in order to limit the impact of cultural and political factors.", "The political ideology distribution in our sample is presented in Figure 1 .", "We asked users their Twitter handle and downloaded their most recent 3,200 tweets, leading to a total of 4,833,133 tweets.", "Before adding users to our 3,938 user data set, we performed the following checks to ensure that the Twitter handle was the user's own: 1) after compensation, users were if they were truthful in reporting their handle and if not, we removed their data from analysis; 2) we manually examined all handles marked as verified by 
Twitter or that had over 2000 followers and eliminated them if they were celebrities or corporate/news accounts, as these were unlikely the users who participated in the survey.", "This study received approval from the Institutional Review Board (IRB) of the University of Pennsylvania.", "In addition, to facilitate comparison to previous work, we also use a data set of 13,651 users with overt political orientation (D 2 ).", "We selected popular political figures unambiguously associated with US liberal politics (@SenSanders, @JoeBiden, @CoryBooker, @JohnKerry) or US conservative politics (@marcorubio, @tedcruz, @RandPaul, @RealBenCarson).", "Liberals in our set (N l = 7417) had to follow on Twitter all of the liberal political figures and none of the conservative figures.", "Likewise, conservative users (N c = 6234) had to follow all of the conservative figures and no liberal figures.", "We downloaded up to 3,200 of each user's most recent tweets, leading to a total of 25,493,407 tweets.", "All tweets were downloaded around 10 August 2016.", "Features In our analysis, we use a broad range of linguistic features described below.", "Unigrams We use the bag-of-words representation to reduce each user's posting history to a normalised frequency distribution over the vocabulary consisting of all words used by at least 10% of the users (6,060 words).", "LIWC Traditional psychological studies use a dictionary-based approach to representing text.", "The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) , and automatically counts word frequencies for 64 different categories manually constructed based on psychological theory.", "These include different parts-of-speech, topical categories and emotions.", "Each user is thereby represented as a frequency distribution over these categories.", "Word2Vec Topics An alternative to LIWC is to use automatically generated word clusters i.e., groups of words that are semantically and/or syntactically similar.", "The clusters help reducing the feature space and provides additional interpretability.", "To create these groups of words, we use an automatic method that leverages word co-occurrence patterns in large corpora by making use of the distributional hypothesis: similar words tend to cooccur in similar contexts (Harris, 1954) .", "Based on co-occurrence statistics, each word is represented as a low dimensional vector of numbers with words closer in this space being more similar (Deerwester et al., 1990) .", "We use the method from (Preoţiuc-Pietro et al., 2015a) to compute topics using word2vec similarity (Mikolov et al., 2013a,b) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007) of different sizes (from 30 to 2000).", "We have tried other alternatives to building clusters: using other word similarities to generate clusters -such as NPMI (Lampos et al., 2014) or GloVe as proposed in (Preoţiuc-Pietro et al., 2015a) -or using standard topic modelling approached to create soft clusters of words e.g., Latent Dirichlet Allocation (Blei et al., 2003) .", "For brevity, we present experiments with the best performing feature set containing 500 Word2Vec clusters.", "We aggregate all the words posted in a users' tweets and represent each user as a distribution of the fraction of words belonging to each cluster.", "Sentiment & Emotions We hypothesise that different political ideologies differ in the type and amount of emotions the users express through their posts.", "The most studied model of discrete 
emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004) which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise.", "We automatically quantify these emotions from our Twitter data set using a publicly available crowd-sourcing derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment Turney, 2010, 2013) .", "Using these lexicons, we assign a predicted emotion to each message and then average across all users' posts to obtain user level emotion expression scores.", "Political Terms In order to select unigrams pertaining to politics, we assigned the most frequent 12,000 unigrams in our data set to three categories: • Political words: mentions of political terms (234); • Political NEs: mentions of politician proper names out of the political terms (39); • Media NEs: mentions of political media sources and pundits out of the political terms (20).", "This coding was initially performed by a research assistant studying political science with good knowledge of US politics and were further filtered and checked by one of the authors.", "Analysis First, we explore the relationships between language use and political ideological groups within each feature set and pairs of opposing user groups.", "To illustrate differences between ideological groups we compare the two political extremes (Very Conservative -Very Liberal) and the political moderates (Moderate Conservative -Moderate Liberal).", "We further compare outright moderates with a group combining the two political extremes to study if we can uncover differences in political engagement and extremity, regardless of the conservative-liberal leaning.", "We use univariate partial linear correlations with age and gender as co-variates to factor out the influence of basic demographics.", "For example, in D 1 , users who reported themselves as very conservative are older and more likely males (µ age = 35.1, pct male = 44%) than the data average (µ age = 31.2, pct male = 35%).", "Additionally, prior to combining the two ideologically extreme groups, we sub-sampled the larger class (Very Liberal) to match the smaller class (Very Conservative) in age and gender.", "In the later prediction experiments, we do not perform matching, as this represents useful signal for classification (Ellis and Stimson, 2012) .", "Results with unigrams are presented in Figure 2 and with the other features in Table 1 .", "These are selected using standard statistical significance tests.", "Very Conservatives vs.", "Very Liberals The comparison between the extreme categories reveals the largest number of significant differences.", "The unigrams and Word2Vec clusters specific to conservatives are dominated by religion specific terms ('praying', 'god', W2V-485, W2V-018, W2V-099, L-RELIG), confirming a well-documented relationship (Gelman, 2009) and words describing family relationships ('uncle', 'son', L-FAMILY), another conservative value (Lakoff, 1997) .", "The emphasis on religious terms among conservatives is consistent with the claim that many Americans associate 'conservative' with 'religious' (Ellis and Stimson, 2012) .", "Extreme liberals show a tendency to use more adjectives (W2V-075, W2V-110), adverbs (L-ADVERB), conjunctions (L-CONJ) and comparisons (L-COMPARE) which indicate more nuanced and complex posts.", "Extreme conservatives post tweets higher in all positive emotions than liberals (L-POSEMO, Emot-Joy, Emot-Positive), 
confirming a previously hypothesised relationship (Napier and Jost, 2008) .", "However, extreme liberals are not associated with posting negative emotions either, only using words that reflect more anxiety (L-ANX), which is related to neuroticism in which the liberals are higher (Gerber et al., 2010) .", "Political term analysis reveals the partisan terms Figure 2 : Unigrams with the highest 80 Pearson correlations shown as word clouds in three vertical panels with a binary variable representing the two ideological groups compared.", "The size of the unigram is scaled by its correlation with the ideological group in bold.", "The color indexes relative frequency, from light blue (rarely used) to dark blue (frequently used).", "All correlations are significant at p < .05 and controlled for age and gender.", "', 'racism', 'feminism', 'transgender') .", "This perhaps reflects the desire for conservatives on Twitter to identify like-minded individuals, as extreme conservatives are a minority on the platform.", "Liberals, by contrast, use the platform to discuss and popularize their causes.", "Moderate Conservatives vs.", "Moderate Liberals Comparing the two sides of moderate users reveals a slightly more nuanced view of the two ideologies.", "While moderate conservatives still make heavy use of religious terms and express positive emotions (Emot-Joy, L-DRIVES), they also use affiliative language (L-AFFILIATION) and plural pronouns (L-WE).", "Moderate liberals are identified by very different features compared to their more extreme counterparts.", "Most striking is the use of swear and sex words (L-SEXUAL, L-ANGER, W2V-316), also highlighted by Sylwester and Purver (2015) .", "Two word clusters relating to British culture (W2V-458) and art (W2V-373) reflect that liberals are more inclined towards arts (Dollinger, 2007) .", "Statistically significant political terms are very few compared to the previous comparison, probably due to their lower overall usage, which we further investigate later.", "Moderates vs. 
Extremists Our final comparison looks at outright moderates compared to the two extreme groups combined, as we hypothesise the existence of a difference in overall political engagement.", "Moderates are not characterized by many features besides a topic of casual words (W2V-098), indicating the heterogeneity of this group of users.", "However, regardless of their orientation, the ideological extremists stand out from moderates.", "They use words and word clusters related to political actors (W2V-309), issues (W2V-237) and laws (W2V-296, W2V-288).", "LIWC analysis uncovers differences in article use (L-ARTICLE) or power words (L-POWER) specific to political tweets.", "The overall sentiment of these users is negative (Emot-Fear, Emot-Disgust, Emot-Sadness, L-DEATH) compared to moderates.", "This reveals - combined with the finding from the first comparison - that while extreme conservatives are overall more positive than liberals, both groups share negative expression.", "Political terms are almost all significantly correlated with the extreme ideological groups, confirming the existence of a difference in political engagement which we study in detail next.", "[Table column labels: Con. (1), Con. (2), M.Con. (3), Mod. (4)]", "Figure 3 presents the use of the three types of political terms across the 7 ideological groups in D 1 and the two political groups from D 2 .", "We notice the following: Political Terms • D 2 has a huge skew towards political words, with an average of more than three times more political terms across all three categories than our extreme classes from D 1 ; • Within the groups in D 1 , we observe an almost perfectly symmetrical U-shape across all three types of political terms, confirming our hypothesis about political engagement; • The difference between 1-2/6-7 is larger than 2-3/5-6.", "The extreme liberals and conservatives are disproportionately political, and have the potential to give Twitter's political discussions an unrepresentative, extremist hue (Fiorina, 1999) .", "It is also possible, however, that characterizing one as an extreme liberal or conservative indicates as much about her level of political engagement as it does about her placement on a left-right scale (Converse, 1964; Broockman, 2016) .", "Prediction In this section we build predictive models of political ideology and compare them to data sets obtained using previous work.", "Cross-Group Prediction First, we experiment with classifying between conservatives and liberals across various levels of political engagement in D 1 and between the two polarized groups in D 2 .", "We use logistic regression classification to compare three setups in Table 2 with results measured with ROC AUC as the classes are slightly imbalanced: • 10-fold cross-validation where training is performed on the same task as the testing (principal diagonal); • A train-test setup where training is performed on one task (presented in rows) and testing is performed on another (presented in columns); • A domain adaptation setup (results in brackets) where on each of the 10 folds, the 9 training folds (presented in rows) are supplemented with all the data from a different task (presented in columns) using the EasyAdapt algorithm (Daumé III, 2007) as a proof of concept on the effects of using additional distantly supervised data.", "Data pooling led to worse results than EasyAdapt.", "Each of the three tasks from D 1 has a similar number of training samples, hence we do not expect that data set size has any effects in comparing the results across 
tasks.", "The results with both sets of features show that: • Prediction performance is much higher for D 2 than for D 1 , with the more extreme groups in D 1 being easier to predict than the moderate groups.", "This confirms that the very high accuracies reported by previous research are an artifact of user label collection and that on regular users, the expected accuracy is much lower (Cohen and Ruths, 2013) .", "We further show that, as the level of political engagement decreases, the classification problem becomes even harder; • The model trained on D 2 and Word2Vec word clusters performs significantly worse on D 1 tasks even if the training data is over 10 times larger.", "When using political words, the D 2 trained classifier performs relatively well on all tasks from D 1 ; • Overall, using political words as features performs better than Word2Vec clusters in the binary classification tasks; • Domain adaptation helps in the majority of cases, leading to improvements of up to .03 in AUC (predicting 2v6 supplemented with 3v5 data).", "Political Leaning and Engagement Prediction Political leaning (Conservative -Liberal, excluding the Moderate group) can be considered an ordinal variable and the prediction problem framed as one of regression.", "In addition to the political leaning prediction, based on analysis and previous prediction results, we hypothesize the existence of a separate dimension of political engagement regardless of the partisan side.", "Thus, we merge users from classes 3-5, 2-6, 1-7 and create a variable with four values, where the lowest value is represented by moderate users (4) and the highest value is represented by either very conservative (1) or very liberal (7) users.", "We use a linear regression algorithm with an Elastic Net regularizer (Zou and Hastie, 2005) as implemented in ScikitLearn (Pedregosa et al., 2011) .", "To evaluate our results, we split our data into 10 stratified folds and performed crossvalidation on one held-out fold at a time.", "For all our methods we tune the parameters of our models on a separate validation fold.", "The overall performance is assessed using Pearson correlation between the set of predicted values and the userreported score.", "Results are presented in Table 3 .", "735 The same patterns hold when evaluating the results with Root Mean Squared Error (RMSE).", "Table 3 : Pearson correlations between the predictions and self-reported ideologies using linear regression with each feature category and a linear combination of their predictions in a 10-fold cross-validation setup.", "Political leaning is represented on the 1-7 scale removing the moderates (4).", "Political engagement is a scale ranging from 4 through 3-5 and 2-6 to 1-7.", "The results show that both dimensions can be predicted well above chance, with political leaning being easier to predict than engagement.", "Word2Vec clusters obtain the highest predictive accuracy for political leaning, even though they did not perform as well in the previous classification tasks.", "For political engagement, political terms and Word2Vec clusters obtain similar predictive accuracy.", "This result is expected based on the results from Figure 3 , which showed how political term usage varies across groups, and how it is especially dependent on political engagement.", "While political terms are very effective at distinguishing between two opposing political groups, they can not discriminate as well between levels of engagement within the same ideological orientation.", "Combining all 
classifiers' predictions in a linear ensemble obtains best results when compared to each individual category.", "Encoding Class Structure In our previous experiments, we uncovered that certain relationships exist between the seven groups.", "For example, extreme conservatives and liberals both demonstrate strong political engagement.", "Therefore, this class structure can be exploited to improve classification performance.", "To this end, we deploy the sparse graph regularized approach (Argyriou et al., 2007; Zhou et al., 2011) to encode the structure of the seven classes as a graph regularizer in a logistic regression framework.", "In particular, we employed a multi-task learning paradigm, where each task is a one-vs-all classification.", "Multi-task learning (MTL) is a learning paradigm that jointly learns multiple related tasks and can achieve better generalization performance than learning each task individually, especially when presented with insufficient training samples (Liu et al., 2015, 2016b).", "The group structure is encoded into a matrix R which codes the groups which are considered similar.", "The objective of the sparse graph regularized multi-task learning problem is: $\min_{W,c} \sum_{t=1}^{\tau} \sum_{i=1}^{N} \log\big(1 + \exp(-Y_{t,i}(W_{t}^{T} X_{t,i} + c_{t}))\big) + \gamma \|WR\|_{F}^{2} + \lambda \|W\|_{1}$, where $\tau$ is the number of tasks, $N$ the number of samples, $X$ the feature matrix, $Y$ the outcome matrix, $W_{t}$ and $c_{t}$ are the model for task $t$, and $R$ is the structure matrix.", "We define three R matrices: (1) codes that groups with similar political engagement are similar (i.e.", "1-7, 2-6, 3-5); (2) codes that groups from each ideological side are similar (i.e.", "1-2, 1-3, 2-3, 5-6, 5-7, 6-7); (3) learnt from the data.", "Results are presented in Table 4.", "Regular logistic regression performs slightly better than the majority class baseline, which demonstrates that the 7-class classification is a very hard problem although most misclassifications are within one ideology point.", "The graph regularization (GR) improves the classification performance over logistic regression (LR) in all cases, with the political leaning based matrix (GR-Leaning) obtaining 2% higher accuracy than the political engagement one (GR-Engagement) and the learnt matrix (GR-Learnt) obtaining best results.", "Conclusions This study analyzed user-level political ideology through Twitter posts.", "In contrast to previous work, we made use of a novel data set where fine-grained user political ideology labels are obtained through surveys as opposed to binary self-reports.", "We showed that users in our data set are far less likely to post about politics and real-world fine-grained political ideology prediction is harder and more nuanced than previously reported.", "We analyzed language differences between the ideological groups and uncovered a dimension of political engagement separate from political leaning.", "Our work has implications for pollsters or marketers, who are most interested to identify and persuade moderate users.", "With respect to political conclusions, researchers commonly conceptualize ideology as a single, left-right dimension similar to what we observe in the U.S. 
Congress (Ansolabehere et al., 2008; Bafumi and Herron, 2010) .", "Our results suggest a different direction: self-reported political extremity is more an indication of political engagement than of ideological self-placement (Abramowitz, 2010) .", "In fact, only self-reported extremists appear to devote much of their Twitter activity to politics at all.", "While our study focused solely on text posted by the user, follow-up work can use other modalities such as images or social network analysis to improve prediction performance.", "In addition, our work on user-level modeling can be integrated with work on message-level political bias to study how this is revealed across users with various levels of engagement.", "Another direction of future study will look at political ideology prediction in other countries and cultures, where ideology has different or multiple dimensions." ] }
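The Word2Vec cluster features described in the Features section above (cluster word vectors, then represent each user by the fraction of their words in each cluster) might look roughly like this. The toy corpus, 50-dimensional vectors, cosine-similarity affinity, and 2 clusters are all placeholders; the paper clusters a much larger vocabulary into 30 to 2000 clusters using word2vec similarity and spectral clustering.

```python
# Rough sketch: word2vec vectors -> spectral clustering over a similarity
# matrix -> per-user distribution over word clusters. Toy data only.
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

user_tweets = [
    ["vote", "senate", "bill", "policy", "tax"],
    ["praying", "church", "family", "blessed", "god"],
    ["movie", "music", "art", "show", "band"],
]
w2v = Word2Vec(sentences=user_tweets, vector_size=50, min_count=1, seed=0)
vocab = list(w2v.wv.index_to_key)
vectors = np.array([w2v.wv[w] for w in vocab])

similarity = np.clip(cosine_similarity(vectors), 0.0, None)  # non-negative affinity
n_clusters = 2
labels = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                            random_state=0).fit_predict(similarity)
word2cluster = dict(zip(vocab, labels))

def cluster_distribution(tokens):
    """Fraction of a user's tokens falling in each word cluster."""
    counts = np.zeros(n_clusters)
    for tok in tokens:
        if tok in word2cluster:
            counts[word2cluster[tok]] += 1
    return counts / max(counts.sum(), 1.0)

for tweets in user_tweets:
    print(cluster_distribution(tweets))
```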
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Data Set", "Features", "Analysis", "Very Conservatives vs. Very Liberals", "Moderate Conservatives vs. Moderate Liberals", "Moderates vs. Extremists", "Political Terms", "Prediction", "Cross-Group Prediction", "Political Leaning and Engagement Prediction", "Encoding Class Structure", "Conclusions" ] }
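A sketch of the lexicon-based emotion scoring described in the Features section: each message is scored against a word-emotion association lexicon and the scores are averaged over a user's messages. The study uses the NRC crowd-sourced lexicon; the four-entry lexicon below is an invented placeholder.

```python
# Toy sketch of per-user emotion scores from a word-emotion lexicon.
from collections import Counter

emotion_lexicon = {          # word -> associated emotions (invented examples)
    "angry": {"anger", "negative"},
    "blessed": {"joy", "positive"},
    "scared": {"fear", "negative"},
    "love": {"joy", "positive"},
}
EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise",
            "positive", "negative"]

def message_scores(tokens):
    counts = Counter()
    for tok in tokens:
        for emo in emotion_lexicon.get(tok.lower(), ()):
            counts[emo] += 1
    total = max(len(tokens), 1)
    return {emo: counts[emo] / total for emo in EMOTIONS}

def user_scores(messages):
    per_msg = [message_scores(m.split()) for m in messages]
    return {emo: sum(s[emo] for s in per_msg) / len(per_msg) for emo in EMOTIONS}

print(user_scores(["feeling blessed today", "so angry and scared about this bill"]))
```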
GEM-SciDuet-train-94#paper-1239#slide-6
Over simplification
H2 The prediction problem was so far over-simplified Topics Political Terms Domain Adaptation ROC AUC, Logistic Regression, 10-fold cross-validation Predicting continuous political leaning (1 7) Unigrams LIWC Topics Emotions Political All Pearson R between predictions and true labels, Linear Regression, GR Logistic regression with Group Lasso regularisation
H2 The prediction problem was so far over-simplified Topics Political Terms Domain Adaptation ROC AUC, Logistic Regression, 10-fold cross-validation Predicting continuous political leaning (1 7) Unigrams LIWC Topics Emotions Political All Pearson R between predictions and true labels, Linear Regression, GR Logistic regression with Group Lasso regularisation
[]
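The two outcome variables used in the prediction experiments (political leaning and political engagement) can be derived from the 7-point self-reports as below. The exact numeric coding of the engagement scale is an assumption; the paper only specifies the ordering moderate (4), then 3-5, then 2-6, then 1-7.

```python
# Small sketch of the two derived targets from the 7-point self-reports.
def political_leaning(label_1_to_7):
    """Ordinal leaning target; moderates (4) are excluded from regression."""
    return None if label_1_to_7 == 4 else label_1_to_7

def political_engagement(label_1_to_7):
    """0 = moderate ... 3 = very conservative or very liberal (assumed coding)."""
    return {4: 0, 3: 1, 5: 1, 2: 2, 6: 2, 1: 3, 7: 3}[label_1_to_7]

assert political_engagement(1) == political_engagement(7) == 3
assert political_leaning(4) is None
print([political_engagement(x) for x in range(1, 8)])
```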
GEM-SciDuet-train-94#paper-1239#slide-7
1239
Beyond Binary Labels: Political Ideology Prediction of Twitter Users
Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US. This study examines users' political ideology using a seven-point scale which enables us to identify politically moderate and neutral users - groups which are of particular interest to political scientists and pollsters. Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users. Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords. Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175 ], "paper_content_text": [ "Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US.", "This study examines users' political ideology using a sevenpoint scale which enables us to identify politically moderate and neutral usersgroups which are of particular interest to political scientists and pollsters.", "Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users.", "Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords.", "Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.", "Introduction Social media is used by people to share their opinions and views.", "Unsurprisingly, an important part of the population shares opinions and news related to politics or causes they support, thus offering strong cues about their political preferences and ideologies.", "In addition, political membership is also predictable purely from one's interests or demographics -it is much more likely for a religious person to be conservative or for a younger person to lean liberal (Ellis and Stimson, 2012) .", "* Work carried out during a research visit at the University of Pennsylvania User trait prediction from text is based on the assumption that language use reflects a user's demographics, psychological states or preferences.", "Applications include prediction of age (Rao et al., 2010; Flekova et al., 2016b) , gender (Burger et al., 2011; Sap et al., 2014) , personality (Schwartz et al., 2013; , socioeconomic status (Preoţiuc-Pietro et al., 2015a,b; Liu et al., 2016c) , popularity (Lampos et al., 2014) or location (Cheng et al., 2010) .", "Research on predicting political orientation has focused on methodological improvements (Pennacchiotti and Popescu, 2011) and used data sets with publicly stated dichotomous political orientation labels due to their easy accessibility (Sylwester and Purver, 2015) .", "However, these data sets are not representative samples of the entire population (Cohen and Ruths, 2013) and do not accurately reflect the variety of political attitudes and engagement (Kam et al., 2007) .", "For example, we expect users who state their political affiliation in their profile description, tweet with partisan hashtags or appear in public party lists to use social media as a means of popularizing and supporting their political 
beliefs (Bar-berASa, 2015) .", "Many users may choose not to publicly post about their political preference for various social goals or perhaps this preference may not be strong or representative enough to be disclosed online.", "Dichotomous political preference also ignores users who do not have a political ideology.", "All of these types of users are very important for researchers aiming to understand group preferences, traits or moral values (Lewis and Reiley, 2014; Hersh, 2015) .", "The most common political ideology spectrum in the US is the conservative -liberal (Ellis and Stimson, 2012) .", "We collect a novel data set of Twitter users mapped to this seven-point spectrum which allows us to: 1.", "Uncover the differences in language use between ideological groups; 2.", "Develop a user-level political ideology prediction algorithm that classifies all levels of engagement and leverages the structure in the political ideology spectrum.", "First, using a broad range of language features including unigrams, word clusters and emotions, we study the linguistic differences between the two ideologically extreme groups, the two ideologically moderate groups and between both extremes and moderates in order to provide insight into the content they post on Twitter.", "In addition, we examine the extent to which the ideological groups in our data set post about politics and compare it to a data set obtained similarly to previous work.", "In prediction experiments, we show how accurately we can distinguish between opposing ideological groups in various scenarios and that previous binary political orientation prediction has been oversimplified.", "Then, we measure the extent to which we can predict the two dimensions of political leaning and engagement.", "Finally, we build an ideology classifier in a multi-task learning setup that leverages the relationships between groups.", "1 Related Work Automatically inferring user traits from their online footprints is a prolific topic of research, enabled by the increasing availability of user generated data and advances in machine learning.", "Beyond its research oriented goals, user profiling has important industry applications in online marketing, personalization or large-scale audience profiling.", "To this end, researchers have used a wide range of types of online footprints, including video (Subramanian et al., 2013) , audio (Alam and Riccardi, 2014 ), text (Preoţiuc-Pietro et al., 2015a) , profile images (Liu et al., 2016a) , social data (Van Der Heide et al., 2012; Hall et al., 2014) , social networks (Perozzi and Skiena, 2015; Rout et al., 2013) , payment data (Wang et al., 2016) and endorsements .", "Political orientation prediction has been studied in two related, albeit crucially different scenarios, as also identified in (Zafar et al., 2016) .", "First, researchers aimed to identify and quantify orientation of words (Monroe et al., 2008) , hashtags (Weber et al., 2013) or documents (Iyyer et al., 2014) , or to detect bias (Yano et al., 2010) or impartiality (Zafar et al., 2016) at a document level.", "Our study belongs to the second category, where political orientation is inferred at a user-level.", "All previous studies study labeling US conservatives vs. 
liberals using either text (Rao et al., 2010) , social network connections (Zamal et al., 2012) , platform-specific features (Conover et al., 2011) or a combination of these (Pennacchiotti and Popescu, 2011; Volkova et al., 2014) , with very high reported accuracies of up to 94.9% (Conover et al., 2011) .", "However, all previous work on predicting userlevel political preferences are limited to a binary prediction between liberal/democrat and conservative/republican, disregarding any nuances in political ideology.", "In addition, as the focus of the studies is more on the methodological or interpretation aspects of the problem, another downside is that the user labels were obtained in simple, albeit biased ways.", "These include users who explicitly state their political orientation on user lists of party supporters (Zamal et al., 2012; Pennacchiotti and Popescu, 2011) , supporting partisan causes (Rao et al., 2010) , by following political figures (Volkova et al., 2014) or party accounts (Sylwester and Purver, 2015) or that retweet partisan hashtags (Conover et al., 2011) .", "As also identified in (Cohen and Ruths, 2013) and further confirmed later in this study, these data sets are biased: most people do not clearly state their political preference online -fewer than 5% according to Priante et al.", "(2016) -and those that state their preference are very likely to be political activists.", "Cohen and Ruths (2013) demonstrated that predictive accuracy of classifiers is significantly lower when confronted with users that do not explicitly mention their political orientation.", "Despite this, their study is limited because in their hardest classification task, they use crowdsourced political orientation labels, which may not correspond to reality and suffer from biases (Flekova et al., 2016a; .", "Further, they still only look at predicting binary political orientation.", "To date, no other research on this topic has taken into account these findings.", "Data Set The main data set used in this study consists of 3,938 users recruited through the Qualtrics platform (D 1 ).", "Each participant was compensated with 3 USD for 15 minutes of their time.", "All participants first answered the same demographic questions (including political ideology), then were directed to one of four sets of psychological questionnaires unrelated to the political ideology question.", "They were asked to self-report their political ideology on a seven point scale: Very conservative (1), Conservative (2), Moderately conservative (3), Moderate (4), Moderately liberal (5), Liberal (6), Very liberal (7).", "In addition, participants had the option of choosing Apathetic and Other, which have ambiguous fits on the conservative -liberal spectrum and were removed from our analysis (399 users).", "We also asked participants to self-report their gender (2322 female, 1205 male, 12 other) and age.", "Participants were all from the US in order to limit the impact of cultural and political factors.", "The political ideology distribution in our sample is presented in Figure 1 .", "We asked users their Twitter handle and downloaded their most recent 3,200 tweets, leading to a total of 4,833,133 tweets.", "Before adding users to our 3,938 user data set, we performed the following checks to ensure that the Twitter handle was the user's own: 1) after compensation, users were if they were truthful in reporting their handle and if not, we removed their data from analysis; 2) we manually examined all handles marked as verified by 
Twitter or that had over 2000 followers and eliminated them if they were celebrities or corporate/news accounts, as these were unlikely the users who participated in the survey.", "This study received approval from the Institutional Review Board (IRB) of the University of Pennsylvania.", "In addition, to facilitate comparison to previous work, we also use a data set of 13,651 users with overt political orientation (D 2 ).", "We selected popular political figures unambiguously associated with US liberal politics (@SenSanders, @JoeBiden, @CoryBooker, @JohnKerry) or US conservative politics (@marcorubio, @tedcruz, @RandPaul, @RealBenCarson).", "Liberals in our set (N l = 7417) had to follow on Twitter all of the liberal political figures and none of the conservative figures.", "Likewise, conservative users (N c = 6234) had to follow all of the conservative figures and no liberal figures.", "We downloaded up to 3,200 of each user's most recent tweets, leading to a total of 25,493,407 tweets.", "All tweets were downloaded around 10 August 2016.", "Features In our analysis, we use a broad range of linguistic features described below.", "Unigrams We use the bag-of-words representation to reduce each user's posting history to a normalised frequency distribution over the vocabulary consisting of all words used by at least 10% of the users (6,060 words).", "LIWC Traditional psychological studies use a dictionary-based approach to representing text.", "The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) , and automatically counts word frequencies for 64 different categories manually constructed based on psychological theory.", "These include different parts-of-speech, topical categories and emotions.", "Each user is thereby represented as a frequency distribution over these categories.", "Word2Vec Topics An alternative to LIWC is to use automatically generated word clusters i.e., groups of words that are semantically and/or syntactically similar.", "The clusters help reducing the feature space and provides additional interpretability.", "To create these groups of words, we use an automatic method that leverages word co-occurrence patterns in large corpora by making use of the distributional hypothesis: similar words tend to cooccur in similar contexts (Harris, 1954) .", "Based on co-occurrence statistics, each word is represented as a low dimensional vector of numbers with words closer in this space being more similar (Deerwester et al., 1990) .", "We use the method from (Preoţiuc-Pietro et al., 2015a) to compute topics using word2vec similarity (Mikolov et al., 2013a,b) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007) of different sizes (from 30 to 2000).", "We have tried other alternatives to building clusters: using other word similarities to generate clusters -such as NPMI (Lampos et al., 2014) or GloVe as proposed in (Preoţiuc-Pietro et al., 2015a) -or using standard topic modelling approached to create soft clusters of words e.g., Latent Dirichlet Allocation (Blei et al., 2003) .", "For brevity, we present experiments with the best performing feature set containing 500 Word2Vec clusters.", "We aggregate all the words posted in a users' tweets and represent each user as a distribution of the fraction of words belonging to each cluster.", "Sentiment & Emotions We hypothesise that different political ideologies differ in the type and amount of emotions the users express through their posts.", "The most studied model of discrete 
emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004), which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise.", "We automatically quantify these emotions from our Twitter data set using a publicly available crowdsourcing-derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment (Mohammad and Turney, 2010, 2013).", "Using these lexicons, we assign a predicted emotion to each message and then average across all users' posts to obtain user-level emotion expression scores.", "Political Terms In order to select unigrams pertaining to politics, we assigned the most frequent 12,000 unigrams in our data set to three categories: • Political words: mentions of political terms (234); • Political NEs: mentions of politician proper names out of the political terms (39); • Media NEs: mentions of political media sources and pundits out of the political terms (20).", "This coding was initially performed by a research assistant studying political science with good knowledge of US politics, and the terms were further filtered and checked by one of the authors.", "Analysis First, we explore the relationships between language use and political ideological groups within each feature set and pairs of opposing user groups.", "To illustrate differences between ideological groups we compare the two political extremes (Very Conservative - Very Liberal) and the political moderates (Moderate Conservative - Moderate Liberal).", "We further compare outright moderates with a group combining the two political extremes to study if we can uncover differences in political engagement and extremity, regardless of the conservative-liberal leaning.", "We use univariate partial linear correlations with age and gender as co-variates to factor out the influence of basic demographics.", "For example, in D1, users who reported themselves as very conservative are older and more likely males (µ_age = 35.1, pct male = 44%) than the data average (µ_age = 31.2, pct male = 35%).", "Additionally, prior to combining the two ideologically extreme groups, we sub-sampled the larger class (Very Liberal) to match the smaller class (Very Conservative) in age and gender.", "In the later prediction experiments, we do not perform matching, as this represents useful signal for classification (Ellis and Stimson, 2012).", "Results with unigrams are presented in Figure 2 and with the other features in Table 1.", "These are selected using standard statistical significance tests.", "Very Conservatives vs. Very Liberals The comparison between the extreme categories reveals the largest number of significant differences.", "The unigrams and Word2Vec clusters specific to conservatives are dominated by religion-specific terms ('praying', 'god', W2V-485, W2V-018, W2V-099, L-RELIG), confirming a well-documented relationship (Gelman, 2009), and words describing family relationships ('uncle', 'son', L-FAMILY), another conservative value (Lakoff, 1997).", "The emphasis on religious terms among conservatives is consistent with the claim that many Americans associate 'conservative' with 'religious' (Ellis and Stimson, 2012).", "Extreme liberals show a tendency to use more adjectives (W2V-075, W2V-110), adverbs (L-ADVERB), conjunctions (L-CONJ) and comparisons (L-COMPARE), which indicate more nuanced and complex posts.", "Extreme conservatives post tweets higher in all positive emotions than liberals (L-POSEMO, Emot-Joy, Emot-Positive), 
confirming a previously hypothesised relationship (Napier and Jost, 2008).", "However, extreme liberals are not associated with posting negative emotions either, only using words that reflect more anxiety (L-ANX), which is related to neuroticism, in which liberals are higher (Gerber et al., 2010).", "Political term analysis reveals that extreme conservatives favour explicitly partisan terms, while extreme liberals mention issue terms ('racism', 'feminism', 'transgender').", "[Figure 2 caption: Unigrams with the highest 80 Pearson correlations shown as word clouds in three vertical panels with a binary variable representing the two ideological groups compared. The size of the unigram is scaled by its correlation with the ideological group in bold. The color indexes relative frequency, from light blue (rarely used) to dark blue (frequently used). All correlations are significant at p < .05 and controlled for age and gender.]", "This perhaps reflects the desire for conservatives on Twitter to identify like-minded individuals, as extreme conservatives are a minority on the platform.", "Liberals, by contrast, use the platform to discuss and popularize their causes.", "Moderate Conservatives vs. Moderate Liberals Comparing the two sides of moderate users reveals a slightly more nuanced view of the two ideologies.", "While moderate conservatives still make heavy use of religious terms and express positive emotions (Emot-Joy, L-DRIVES), they also use affiliative language (L-AFFILIATION) and plural pronouns (L-WE).", "Moderate liberals are identified by very different features compared to their more extreme counterparts.", "Most striking is the use of swear and sex words (L-SEXUAL, L-ANGER, W2V-316), also highlighted by Sylwester and Purver (2015).", "Two word clusters relating to British culture (W2V-458) and art (W2V-373) reflect that liberals are more inclined towards arts (Dollinger, 2007).", "Statistically significant political terms are very few compared to the previous comparison, probably due to their lower overall usage, which we further investigate later.", "
Moderates vs. Extremists Our final comparison looks at outright moderates compared to the two extreme groups combined, as we hypothesise the existence of a difference in overall political engagement.", "Moderates are not characterized by many features besides a topic of casual words (W2V-098), indicating the heterogeneity of this group of users.", "However, regardless of their orientation, the ideological extremists stand out from moderates.", "They use words and word clusters related to political actors (W2V-309), issues (W2V-237) and laws (W2V-296, W2V-288).", "LIWC analysis uncovers differences in article use (L-ARTICLE) or power words (L-POWER) specific to political tweets.", "The overall sentiment of these users is negative (Emot-Fear, Emot-Disgust, Emot-Sadness, L-DEATH) compared to moderates.", "This reveals - combined with the finding from the first comparison - that while extreme conservatives are overall more positive than liberals, both groups share negative expression.", "Political terms are almost all significantly correlated with the extreme ideological groups, confirming the existence of a difference in political engagement, which we study in detail next.", "Political Terms Figure 3 presents the use of the three types of political terms across the 7 ideological groups in D1 and the two political groups from D2.", "We notice the following: • D2 has a huge skew towards political words, with an average of more than three times more political terms across all three categories than our extreme classes from D1; • Within the groups in D1, we observe an almost perfectly symmetrical U-shape across all three types of political terms, confirming our hypothesis about political engagement; • The difference between 1-2/6-7 is larger than 2-3/5-6.", "The extreme liberals and conservatives are disproportionately political, and have the potential to give Twitter's political discussions an unrepresentative, extremist hue (Fiorina, 1999).", "It is also possible, however, that characterizing one as an extreme liberal or conservative indicates as much about her level of political engagement as it does about her placement on a left-right scale (Converse, 1964; Broockman, 2016).", "Prediction In this section we build predictive models of political ideology and compare them to data sets obtained using previous work.", "Cross-Group Prediction First, we experiment with classifying between conservatives and liberals across various levels of political engagement in D1 and between the two polarized groups in D2.", "We use logistic regression classification to compare three setups in Table 2, with results measured with ROC AUC as the classes are slightly imbalanced: • 10-fold cross-validation where training is performed on the same task as the testing (principal diagonal); • A train-test setup where training is performed on one task (presented in rows) and testing is performed on another (presented in columns); • A domain adaptation setup (results in brackets) where on each of the 10 folds, the 9 training folds (presented in rows) are supplemented with all the data from a different task (presented in columns) using the EasyAdapt algorithm (Daumé III, 2007) as a proof of concept on the effects of using additional distantly supervised data.", "Data pooling led to worse results than EasyAdapt.", "Each of the three tasks from D1 has a similar number of training samples, hence we do not expect that data set size has any effects in comparing the results across 
tasks.", "The results with both sets of features show that: • Prediction performance is much higher for D 2 than for D 1 , with the more extreme groups in D 1 being easier to predict than the moderate groups.", "This confirms that the very high accuracies reported by previous research are an artifact of user label collection and that on regular users, the expected accuracy is much lower (Cohen and Ruths, 2013) .", "We further show that, as the level of political engagement decreases, the classification problem becomes even harder; • The model trained on D 2 and Word2Vec word clusters performs significantly worse on D 1 tasks even if the training data is over 10 times larger.", "When using political words, the D 2 trained classifier performs relatively well on all tasks from D 1 ; • Overall, using political words as features performs better than Word2Vec clusters in the binary classification tasks; • Domain adaptation helps in the majority of cases, leading to improvements of up to .03 in AUC (predicting 2v6 supplemented with 3v5 data).", "Political Leaning and Engagement Prediction Political leaning (Conservative -Liberal, excluding the Moderate group) can be considered an ordinal variable and the prediction problem framed as one of regression.", "In addition to the political leaning prediction, based on analysis and previous prediction results, we hypothesize the existence of a separate dimension of political engagement regardless of the partisan side.", "Thus, we merge users from classes 3-5, 2-6, 1-7 and create a variable with four values, where the lowest value is represented by moderate users (4) and the highest value is represented by either very conservative (1) or very liberal (7) users.", "We use a linear regression algorithm with an Elastic Net regularizer (Zou and Hastie, 2005) as implemented in ScikitLearn (Pedregosa et al., 2011) .", "To evaluate our results, we split our data into 10 stratified folds and performed crossvalidation on one held-out fold at a time.", "For all our methods we tune the parameters of our models on a separate validation fold.", "The overall performance is assessed using Pearson correlation between the set of predicted values and the userreported score.", "Results are presented in Table 3 .", "735 The same patterns hold when evaluating the results with Root Mean Squared Error (RMSE).", "Table 3 : Pearson correlations between the predictions and self-reported ideologies using linear regression with each feature category and a linear combination of their predictions in a 10-fold cross-validation setup.", "Political leaning is represented on the 1-7 scale removing the moderates (4).", "Political engagement is a scale ranging from 4 through 3-5 and 2-6 to 1-7.", "The results show that both dimensions can be predicted well above chance, with political leaning being easier to predict than engagement.", "Word2Vec clusters obtain the highest predictive accuracy for political leaning, even though they did not perform as well in the previous classification tasks.", "For political engagement, political terms and Word2Vec clusters obtain similar predictive accuracy.", "This result is expected based on the results from Figure 3 , which showed how political term usage varies across groups, and how it is especially dependent on political engagement.", "While political terms are very effective at distinguishing between two opposing political groups, they can not discriminate as well between levels of engagement within the same ideological orientation.", "Combining all 
classifiers' predictions in a linear ensemble obtains the best results when compared to each individual category.", "Encoding Class Structure In our previous experiments, we uncovered that certain relationships exist between the seven groups.", "For example, extreme conservatives and liberals both demonstrate strong political engagement.", "Therefore, this class structure can be exploited to improve classification performance.", "To this end, we deploy the sparse graph regularized approach (Argyriou et al., 2007; Zhou et al., 2011) to encode the structure of the seven classes as a graph regularizer in a logistic regression framework.", "In particular, we employed a multi-task learning paradigm, where each task is a one-vs-all classification.", "Multi-task learning (MTL) is a learning paradigm that jointly learns multiple related tasks and can achieve better generalization performance than learning each task individually, especially when presented with insufficient training samples (Liu et al., 2015, 2016b).", "The group structure is encoded into a matrix R, which codes the groups that are considered similar.", "The objective of the sparse graph regularized multi-task learning problem is: \min_{W,c} \sum_{t=1}^{\tau} \sum_{i=1}^{N} \log\left(1 + \exp\left(-Y_{t,i}\,(W_t^\top X_{t,i} + c_t)\right)\right) + \gamma \|WR\|_F^2 + \lambda \|W\|_1, where \tau is the number of tasks, N the number of samples, X the feature matrix, Y the outcome matrix, W_t and c_t the model for task t, and R the structure matrix.", "We define three R matrices: (1) codes that groups with similar political engagement are similar (i.e. 1-7, 2-6, 3-5); (2) codes that groups from each ideological side are similar (i.e. 1-2, 1-3, 2-3, 5-6, 5-7, 6-7); (3) learnt from the data.", "Results are presented in Table 4.", "Regular logistic regression performs slightly better than the majority class baseline, which demonstrates that the 7-class classification is a very hard problem, although most misclassifications are within one ideology point.", "The graph regularization (GR) improves the classification performance over logistic regression (LR) in all cases, with the political-leaning-based matrix (GR-Leaning) obtaining 2% higher accuracy than the political engagement one (GR-Engagement) and the learnt matrix (GR-Learnt) obtaining the best results.", "Conclusions This study analyzed user-level political ideology through Twitter posts.", "In contrast to previous work, we made use of a novel data set where fine-grained user political ideology labels are obtained through surveys as opposed to binary self-reports.", "We showed that users in our data set are far less likely to post about politics and real-world fine-grained political ideology prediction is harder and more nuanced than previously reported.", "We analyzed language differences between the ideological groups and uncovered a dimension of political engagement separate from political leaning.", "Our work has implications for pollsters or marketers, who are most interested in identifying and persuading moderate users.", "With respect to political conclusions, researchers commonly conceptualize ideology as a single, left-right dimension similar to what we observe in the U.S. 
Congress (Ansolabehere et al., 2008; Bafumi and Herron, 2010) .", "Our results suggest a different direction: self-reported political extremity is more an indication of political engagement than of ideological self-placement (Abramowitz, 2010) .", "In fact, only self-reported extremists appear to devote much of their Twitter activity to politics at all.", "While our study focused solely on text posted by the user, follow-up work can use other modalities such as images or social network analysis to improve prediction performance.", "In addition, our work on user-level modeling can be integrated with work on message-level political bias to study how this is revealed across users with various levels of engagement.", "Another direction of future study will look at political ideology prediction in other countries and cultures, where ideology has different or multiple dimensions." ] }
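The sparse graph regularized multi-task objective quoted above can be written out directly. Below is a minimal NumPy sketch of that objective only, assuming one weight column per one-vs-all task and an R matrix with one ±1 column per pair of groups treated as similar (here the engagement pairs 1-7, 2-6, 3-5); the optimizer (e.g., a proximal-gradient solver) is omitted and all names and values are illustrative, not the authors' code.

```python
import numpy as np

def mtl_graph_objective(W, c, X, Y, R, gamma, lam):
    """Sparse graph regularized multi-task logistic objective.

    W : (d, T) array, one weight column per one-vs-all task
    c : (T,)   array of per-task intercepts
    X : (n, d) user-by-feature matrix, shared across tasks
    Y : (n, T) labels in {-1, +1}, one column per task
    R : (T, k) structure matrix; each column pairs two groups considered similar
    """
    scores = X @ W + c                                  # (n, T) decision values
    logistic = np.sum(np.logaddexp(0.0, -Y * scores))   # sum_t sum_i log(1 + exp(-y (w'x + c)))
    graph = gamma * np.sum((W @ R) ** 2)                # gamma * ||W R||_F^2
    sparsity = lam * np.sum(np.abs(W))                  # lambda * ||W||_1
    return logistic + graph + sparsity

# Toy usage: 7 one-vs-all tasks, R pairing the engagement-similar groups 1-7, 2-6, 3-5.
rng = np.random.default_rng(0)
n, d, T = 200, 50, 7
X = rng.normal(size=(n, d))
Y = rng.choice([-1.0, 1.0], size=(n, T))
R = np.zeros((T, 3))
for col, (a, b) in enumerate([(0, 6), (1, 5), (2, 4)]):   # 0-indexed class pairs
    R[a, col], R[b, col] = 1.0, -1.0
W, c = rng.normal(size=(d, T)) * 0.01, np.zeros(T)
print(mtl_graph_objective(W, c, X, Y, R, gamma=0.1, lam=0.01))
```

The ±1 column encoding of R is one common way to realise a graph regularizer (each column penalises the squared difference between the weight vectors of two similar groups); the paper's learnt R matrix would replace this hand-coded structure.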
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Data Set", "Features", "Analysis", "Very Conservatives vs. Very Liberals", "Moderate Conservatives vs. Moderate Liberals", "Moderates vs. Extremists", "Political Terms", "Prediction", "Cross-Group Prediction", "Political Leaning and Engagement Prediction", "Encoding Class Structure", "Conclusions" ] }
GEM-SciDuet-train-94#paper-1239#slide-7
Neutral Users
Words associated with either extreme conservative or liberal; words associated with neutral users. Correlations are age and gender controlled. Extreme groups are combined using matched age and gender distributions.
Words associated with either extreme conservative or liberal; words associated with neutral users. Correlations are age and gender controlled. Extreme groups are combined using matched age and gender distributions.
[]
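The "Neutral Users" slide above notes that the word correlations are age and gender controlled; the paper content describes these as univariate partial linear correlations with age and gender as covariates. A minimal sketch of that statistic using the standard residualisation approach is below; the data is synthetic and purely illustrative.

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, covariates):
    """Pearson correlation between x and y after regressing out the covariates
    (e.g., age and gender) from both variables via ordinary least squares."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    beta_x, *_ = np.linalg.lstsq(Z, x, rcond=None)
    beta_y, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rx, ry = x - Z @ beta_x, y - Z @ beta_y
    return stats.pearsonr(rx, ry)

# Toy usage: correlate a word's relative frequency with a binary group indicator.
rng = np.random.default_rng(1)
n = 500
age = rng.normal(31, 8, n)
gender = rng.integers(0, 2, n).astype(float)
group = rng.integers(0, 2, n).astype(float)            # e.g., very conservative vs. very liberal
word_freq = 0.02 * group + 0.001 * age + rng.normal(0, 0.05, n)
r, p = partial_corr(word_freq, group, np.column_stack([age, gender]))
print(f"partial r = {r:.3f}, p = {p:.3g}")
```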
GEM-SciDuet-train-94#paper-1239#slide-8
1239
Beyond Binary Labels: Political Ideology Prediction of Twitter Users
Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US. This study examines users' political ideology using a seven-point scale which enables us to identify politically moderate and neutral users - groups which are of particular interest to political scientists and pollsters. Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users. Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords. Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.
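Among the language features used to characterize and predict the groups described in this abstract are the Word2Vec topics (word clusters built by embedding similarity and spectral clustering) detailed in the Features section of the paper content earlier in this file. A rough sketch of how such cluster features could be derived with gensim and scikit-learn follows; the exact similarity measure and hyperparameters in the paper follow Preoţiuc-Pietro et al. (2015a), so the values and function names below are placeholders.

```python
import numpy as np
from collections import Counter
from gensim.models import Word2Vec
from sklearn.cluster import SpectralClustering

def build_word_clusters(tokenised_tweets, n_clusters=500, dim=100):
    """Cluster the vocabulary by word2vec similarity using spectral clustering."""
    w2v = Word2Vec(sentences=tokenised_tweets, vector_size=dim, min_count=10, workers=4)
    vocab = list(w2v.wv.index_to_key)
    vecs = w2v.wv[vocab]
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    affinity = (vecs @ vecs.T + 1.0) / 2.0          # shifted cosine similarity in [0, 1]
    labels = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                                assign_labels="kmeans", random_state=0).fit_predict(affinity)
    return dict(zip(vocab, labels))

def user_cluster_distribution(user_tokens, word2cluster, n_clusters=500):
    """Fraction of a user's words falling into each cluster (the per-user feature vector)."""
    counts = Counter(word2cluster[t] for t in user_tokens if t in word2cluster)
    total = sum(counts.values()) or 1
    return np.array([counts.get(k, 0) / total for k in range(n_clusters)])
```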
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175 ], "paper_content_text": [ "Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US.", "This study examines users' political ideology using a sevenpoint scale which enables us to identify politically moderate and neutral usersgroups which are of particular interest to political scientists and pollsters.", "Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users.", "Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords.", "Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.", "Introduction Social media is used by people to share their opinions and views.", "Unsurprisingly, an important part of the population shares opinions and news related to politics or causes they support, thus offering strong cues about their political preferences and ideologies.", "In addition, political membership is also predictable purely from one's interests or demographics -it is much more likely for a religious person to be conservative or for a younger person to lean liberal (Ellis and Stimson, 2012) .", "* Work carried out during a research visit at the University of Pennsylvania User trait prediction from text is based on the assumption that language use reflects a user's demographics, psychological states or preferences.", "Applications include prediction of age (Rao et al., 2010; Flekova et al., 2016b) , gender (Burger et al., 2011; Sap et al., 2014) , personality (Schwartz et al., 2013; , socioeconomic status (Preoţiuc-Pietro et al., 2015a,b; Liu et al., 2016c) , popularity (Lampos et al., 2014) or location (Cheng et al., 2010) .", "Research on predicting political orientation has focused on methodological improvements (Pennacchiotti and Popescu, 2011) and used data sets with publicly stated dichotomous political orientation labels due to their easy accessibility (Sylwester and Purver, 2015) .", "However, these data sets are not representative samples of the entire population (Cohen and Ruths, 2013) and do not accurately reflect the variety of political attitudes and engagement (Kam et al., 2007) .", "For example, we expect users who state their political affiliation in their profile description, tweet with partisan hashtags or appear in public party lists to use social media as a means of popularizing and supporting their political 
beliefs (Bar-berASa, 2015) .", "Many users may choose not to publicly post about their political preference for various social goals or perhaps this preference may not be strong or representative enough to be disclosed online.", "Dichotomous political preference also ignores users who do not have a political ideology.", "All of these types of users are very important for researchers aiming to understand group preferences, traits or moral values (Lewis and Reiley, 2014; Hersh, 2015) .", "The most common political ideology spectrum in the US is the conservative -liberal (Ellis and Stimson, 2012) .", "We collect a novel data set of Twitter users mapped to this seven-point spectrum which allows us to: 1.", "Uncover the differences in language use between ideological groups; 2.", "Develop a user-level political ideology prediction algorithm that classifies all levels of engagement and leverages the structure in the political ideology spectrum.", "First, using a broad range of language features including unigrams, word clusters and emotions, we study the linguistic differences between the two ideologically extreme groups, the two ideologically moderate groups and between both extremes and moderates in order to provide insight into the content they post on Twitter.", "In addition, we examine the extent to which the ideological groups in our data set post about politics and compare it to a data set obtained similarly to previous work.", "In prediction experiments, we show how accurately we can distinguish between opposing ideological groups in various scenarios and that previous binary political orientation prediction has been oversimplified.", "Then, we measure the extent to which we can predict the two dimensions of political leaning and engagement.", "Finally, we build an ideology classifier in a multi-task learning setup that leverages the relationships between groups.", "1 Related Work Automatically inferring user traits from their online footprints is a prolific topic of research, enabled by the increasing availability of user generated data and advances in machine learning.", "Beyond its research oriented goals, user profiling has important industry applications in online marketing, personalization or large-scale audience profiling.", "To this end, researchers have used a wide range of types of online footprints, including video (Subramanian et al., 2013) , audio (Alam and Riccardi, 2014 ), text (Preoţiuc-Pietro et al., 2015a) , profile images (Liu et al., 2016a) , social data (Van Der Heide et al., 2012; Hall et al., 2014) , social networks (Perozzi and Skiena, 2015; Rout et al., 2013) , payment data (Wang et al., 2016) and endorsements .", "Political orientation prediction has been studied in two related, albeit crucially different scenarios, as also identified in (Zafar et al., 2016) .", "First, researchers aimed to identify and quantify orientation of words (Monroe et al., 2008) , hashtags (Weber et al., 2013) or documents (Iyyer et al., 2014) , or to detect bias (Yano et al., 2010) or impartiality (Zafar et al., 2016) at a document level.", "Our study belongs to the second category, where political orientation is inferred at a user-level.", "All previous studies study labeling US conservatives vs. 
liberals using either text (Rao et al., 2010) , social network connections (Zamal et al., 2012) , platform-specific features (Conover et al., 2011) or a combination of these (Pennacchiotti and Popescu, 2011; Volkova et al., 2014) , with very high reported accuracies of up to 94.9% (Conover et al., 2011) .", "However, all previous work on predicting userlevel political preferences are limited to a binary prediction between liberal/democrat and conservative/republican, disregarding any nuances in political ideology.", "In addition, as the focus of the studies is more on the methodological or interpretation aspects of the problem, another downside is that the user labels were obtained in simple, albeit biased ways.", "These include users who explicitly state their political orientation on user lists of party supporters (Zamal et al., 2012; Pennacchiotti and Popescu, 2011) , supporting partisan causes (Rao et al., 2010) , by following political figures (Volkova et al., 2014) or party accounts (Sylwester and Purver, 2015) or that retweet partisan hashtags (Conover et al., 2011) .", "As also identified in (Cohen and Ruths, 2013) and further confirmed later in this study, these data sets are biased: most people do not clearly state their political preference online -fewer than 5% according to Priante et al.", "(2016) -and those that state their preference are very likely to be political activists.", "Cohen and Ruths (2013) demonstrated that predictive accuracy of classifiers is significantly lower when confronted with users that do not explicitly mention their political orientation.", "Despite this, their study is limited because in their hardest classification task, they use crowdsourced political orientation labels, which may not correspond to reality and suffer from biases (Flekova et al., 2016a; .", "Further, they still only look at predicting binary political orientation.", "To date, no other research on this topic has taken into account these findings.", "Data Set The main data set used in this study consists of 3,938 users recruited through the Qualtrics platform (D 1 ).", "Each participant was compensated with 3 USD for 15 minutes of their time.", "All participants first answered the same demographic questions (including political ideology), then were directed to one of four sets of psychological questionnaires unrelated to the political ideology question.", "They were asked to self-report their political ideology on a seven point scale: Very conservative (1), Conservative (2), Moderately conservative (3), Moderate (4), Moderately liberal (5), Liberal (6), Very liberal (7).", "In addition, participants had the option of choosing Apathetic and Other, which have ambiguous fits on the conservative -liberal spectrum and were removed from our analysis (399 users).", "We also asked participants to self-report their gender (2322 female, 1205 male, 12 other) and age.", "Participants were all from the US in order to limit the impact of cultural and political factors.", "The political ideology distribution in our sample is presented in Figure 1 .", "We asked users their Twitter handle and downloaded their most recent 3,200 tweets, leading to a total of 4,833,133 tweets.", "Before adding users to our 3,938 user data set, we performed the following checks to ensure that the Twitter handle was the user's own: 1) after compensation, users were if they were truthful in reporting their handle and if not, we removed their data from analysis; 2) we manually examined all handles marked as verified by 
Twitter or that had over 2000 followers and eliminated them if they were celebrities or corporate/news accounts, as these were unlikely the users who participated in the survey.", "This study received approval from the Institutional Review Board (IRB) of the University of Pennsylvania.", "In addition, to facilitate comparison to previous work, we also use a data set of 13,651 users with overt political orientation (D 2 ).", "We selected popular political figures unambiguously associated with US liberal politics (@SenSanders, @JoeBiden, @CoryBooker, @JohnKerry) or US conservative politics (@marcorubio, @tedcruz, @RandPaul, @RealBenCarson).", "Liberals in our set (N l = 7417) had to follow on Twitter all of the liberal political figures and none of the conservative figures.", "Likewise, conservative users (N c = 6234) had to follow all of the conservative figures and no liberal figures.", "We downloaded up to 3,200 of each user's most recent tweets, leading to a total of 25,493,407 tweets.", "All tweets were downloaded around 10 August 2016.", "Features In our analysis, we use a broad range of linguistic features described below.", "Unigrams We use the bag-of-words representation to reduce each user's posting history to a normalised frequency distribution over the vocabulary consisting of all words used by at least 10% of the users (6,060 words).", "LIWC Traditional psychological studies use a dictionary-based approach to representing text.", "The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) , and automatically counts word frequencies for 64 different categories manually constructed based on psychological theory.", "These include different parts-of-speech, topical categories and emotions.", "Each user is thereby represented as a frequency distribution over these categories.", "Word2Vec Topics An alternative to LIWC is to use automatically generated word clusters i.e., groups of words that are semantically and/or syntactically similar.", "The clusters help reducing the feature space and provides additional interpretability.", "To create these groups of words, we use an automatic method that leverages word co-occurrence patterns in large corpora by making use of the distributional hypothesis: similar words tend to cooccur in similar contexts (Harris, 1954) .", "Based on co-occurrence statistics, each word is represented as a low dimensional vector of numbers with words closer in this space being more similar (Deerwester et al., 1990) .", "We use the method from (Preoţiuc-Pietro et al., 2015a) to compute topics using word2vec similarity (Mikolov et al., 2013a,b) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007) of different sizes (from 30 to 2000).", "We have tried other alternatives to building clusters: using other word similarities to generate clusters -such as NPMI (Lampos et al., 2014) or GloVe as proposed in (Preoţiuc-Pietro et al., 2015a) -or using standard topic modelling approached to create soft clusters of words e.g., Latent Dirichlet Allocation (Blei et al., 2003) .", "For brevity, we present experiments with the best performing feature set containing 500 Word2Vec clusters.", "We aggregate all the words posted in a users' tweets and represent each user as a distribution of the fraction of words belonging to each cluster.", "Sentiment & Emotions We hypothesise that different political ideologies differ in the type and amount of emotions the users express through their posts.", "The most studied model of discrete 
emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004) which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise.", "We automatically quantify these emotions from our Twitter data set using a publicly available crowd-sourcing derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment Turney, 2010, 2013) .", "Using these lexicons, we assign a predicted emotion to each message and then average across all users' posts to obtain user level emotion expression scores.", "Political Terms In order to select unigrams pertaining to politics, we assigned the most frequent 12,000 unigrams in our data set to three categories: • Political words: mentions of political terms (234); • Political NEs: mentions of politician proper names out of the political terms (39); • Media NEs: mentions of political media sources and pundits out of the political terms (20).", "This coding was initially performed by a research assistant studying political science with good knowledge of US politics and were further filtered and checked by one of the authors.", "Analysis First, we explore the relationships between language use and political ideological groups within each feature set and pairs of opposing user groups.", "To illustrate differences between ideological groups we compare the two political extremes (Very Conservative -Very Liberal) and the political moderates (Moderate Conservative -Moderate Liberal).", "We further compare outright moderates with a group combining the two political extremes to study if we can uncover differences in political engagement and extremity, regardless of the conservative-liberal leaning.", "We use univariate partial linear correlations with age and gender as co-variates to factor out the influence of basic demographics.", "For example, in D 1 , users who reported themselves as very conservative are older and more likely males (µ age = 35.1, pct male = 44%) than the data average (µ age = 31.2, pct male = 35%).", "Additionally, prior to combining the two ideologically extreme groups, we sub-sampled the larger class (Very Liberal) to match the smaller class (Very Conservative) in age and gender.", "In the later prediction experiments, we do not perform matching, as this represents useful signal for classification (Ellis and Stimson, 2012) .", "Results with unigrams are presented in Figure 2 and with the other features in Table 1 .", "These are selected using standard statistical significance tests.", "Very Conservatives vs.", "Very Liberals The comparison between the extreme categories reveals the largest number of significant differences.", "The unigrams and Word2Vec clusters specific to conservatives are dominated by religion specific terms ('praying', 'god', W2V-485, W2V-018, W2V-099, L-RELIG), confirming a well-documented relationship (Gelman, 2009) and words describing family relationships ('uncle', 'son', L-FAMILY), another conservative value (Lakoff, 1997) .", "The emphasis on religious terms among conservatives is consistent with the claim that many Americans associate 'conservative' with 'religious' (Ellis and Stimson, 2012) .", "Extreme liberals show a tendency to use more adjectives (W2V-075, W2V-110), adverbs (L-ADVERB), conjunctions (L-CONJ) and comparisons (L-COMPARE) which indicate more nuanced and complex posts.", "Extreme conservatives post tweets higher in all positive emotions than liberals (L-POSEMO, Emot-Joy, Emot-Positive), 
confirming a previously hypothesised relationship (Napier and Jost, 2008) .", "However, extreme liberals are not associated with posting negative emotions either, only using words that reflect more anxiety (L-ANX), which is related to neuroticism in which the liberals are higher (Gerber et al., 2010) .", "Political term analysis reveals the partisan terms Figure 2 : Unigrams with the highest 80 Pearson correlations shown as word clouds in three vertical panels with a binary variable representing the two ideological groups compared.", "The size of the unigram is scaled by its correlation with the ideological group in bold.", "The color indexes relative frequency, from light blue (rarely used) to dark blue (frequently used).", "All correlations are significant at p < .05 and controlled for age and gender.", "', 'racism', 'feminism', 'transgender') .", "This perhaps reflects the desire for conservatives on Twitter to identify like-minded individuals, as extreme conservatives are a minority on the platform.", "Liberals, by contrast, use the platform to discuss and popularize their causes.", "Moderate Conservatives vs.", "Moderate Liberals Comparing the two sides of moderate users reveals a slightly more nuanced view of the two ideologies.", "While moderate conservatives still make heavy use of religious terms and express positive emotions (Emot-Joy, L-DRIVES), they also use affiliative language (L-AFFILIATION) and plural pronouns (L-WE).", "Moderate liberals are identified by very different features compared to their more extreme counterparts.", "Most striking is the use of swear and sex words (L-SEXUAL, L-ANGER, W2V-316), also highlighted by Sylwester and Purver (2015) .", "Two word clusters relating to British culture (W2V-458) and art (W2V-373) reflect that liberals are more inclined towards arts (Dollinger, 2007) .", "Statistically significant political terms are very few compared to the previous comparison, probably due to their lower overall usage, which we further investigate later.", "Moderates vs. 
Extremists Our final comparison looks at outright moderates compared to the two extreme groups combined, as we hypothesise the existence of a difference in overall political engagement.", "Moderates are not characterized by many features besides a topic of casual words (W2V-098), indicating the heterogeneity of this group of users.", "However, regardless of their orientation, the ideological extremists stand out from moderates.", "They use words and word clusters related to political actors (W2V-309), issues (W2V-237) and laws (W2V-296, W2V-288).", "LIWC analysis uncovers differences in article use (L-ARTICLE) or power words (L-POWER) specific of political tweets.", "The overall sentiment of these users is negative (Emot-Fear, Emot-Disgust, Emot-Sadness, L-DEATH) compared to moderates.", "This reveals -combined with the finding from the first comparison -that while extreme conservatives are overall more positive than liberals, both groups share negative expression.", "Political terms are almost all significantly correlated with the extreme ideological groups, Con.", "(1) Con.", "(2) M.Con.", "(3) Mod.", "(4) confirming the existence of a difference in political engagement which we study in detail next.", "Figure 3 presents the use of the three types of political terms across the 7 ideological groups in D 1 and the two political groups from D 2 .", "We notice the following: Political Terms • D 2 has a huge skew towards political words, with an average of more than three times more political terms across all three categories than our extreme classes from D 1 ; • Within the groups in D 1 , we observe an almost perfectly symmetrical U-shape across all three types of political terms, confirming our hypothesis about political engagement; • The difference between 1-2/6-7 is larger than 2-3/5-6.", "The extreme liberals and conservatives are disproportionately political, and have the potential to give Twitter's political discussions an unrepresentative, extremist hue (Fiorina, 1999) .", "It is also possible, however, that characterizing one as an extreme liberal or conservative indicates as much about her level of political engagement as it does about her placement on a left-right scale (Converse, 1964; Broockman, 2016) .", "Prediction In this section we build predictive models of political ideology and compare them to data sets obtained using previous work.", "Cross-Group Prediction First, we experiment with classifying between conservatives and liberals across various levels of political engagement in D 1 and between the two polarized groups in D 2 .", "We use logistic regression classification to compare three setups in Table 2 with results measured with ROC AUC as the classes are slightly inbalanced: • 10-fold cross-validation where training is performed on the same task as the testing (principal diagonal); • A train-test setup where training is performed on one task (presented in rows) and testing is performed on another (presented in columns); • A domain adaptation setup (results in brackets) where on each of the 10 folds, the 9 training folds (presented in rows) are supplemented with all the data from a different task (presented in columns) using the EasyAdapt algorithm (Daumé III, 2007) as a proof on concept on the effects of using additional distantly supervised data.", "Data pooling lead to worse results than EasyAdapt.", "Each of the three tasks from D 1 have a similar number of training samples, hence we do not expect that data set size has any effects in comparing the results across 
tasks.", "The results with both sets of features show that: • Prediction performance is much higher for D 2 than for D 1 , with the more extreme groups in D 1 being easier to predict than the moderate groups.", "This confirms that the very high accuracies reported by previous research are an artifact of user label collection and that on regular users, the expected accuracy is much lower (Cohen and Ruths, 2013) .", "We further show that, as the level of political engagement decreases, the classification problem becomes even harder; • The model trained on D 2 and Word2Vec word clusters performs significantly worse on D 1 tasks even if the training data is over 10 times larger.", "When using political words, the D 2 trained classifier performs relatively well on all tasks from D 1 ; • Overall, using political words as features performs better than Word2Vec clusters in the binary classification tasks; • Domain adaptation helps in the majority of cases, leading to improvements of up to .03 in AUC (predicting 2v6 supplemented with 3v5 data).", "Political Leaning and Engagement Prediction Political leaning (Conservative -Liberal, excluding the Moderate group) can be considered an ordinal variable and the prediction problem framed as one of regression.", "In addition to the political leaning prediction, based on analysis and previous prediction results, we hypothesize the existence of a separate dimension of political engagement regardless of the partisan side.", "Thus, we merge users from classes 3-5, 2-6, 1-7 and create a variable with four values, where the lowest value is represented by moderate users (4) and the highest value is represented by either very conservative (1) or very liberal (7) users.", "We use a linear regression algorithm with an Elastic Net regularizer (Zou and Hastie, 2005) as implemented in ScikitLearn (Pedregosa et al., 2011) .", "To evaluate our results, we split our data into 10 stratified folds and performed crossvalidation on one held-out fold at a time.", "For all our methods we tune the parameters of our models on a separate validation fold.", "The overall performance is assessed using Pearson correlation between the set of predicted values and the userreported score.", "Results are presented in Table 3 .", "735 The same patterns hold when evaluating the results with Root Mean Squared Error (RMSE).", "Table 3 : Pearson correlations between the predictions and self-reported ideologies using linear regression with each feature category and a linear combination of their predictions in a 10-fold cross-validation setup.", "Political leaning is represented on the 1-7 scale removing the moderates (4).", "Political engagement is a scale ranging from 4 through 3-5 and 2-6 to 1-7.", "The results show that both dimensions can be predicted well above chance, with political leaning being easier to predict than engagement.", "Word2Vec clusters obtain the highest predictive accuracy for political leaning, even though they did not perform as well in the previous classification tasks.", "For political engagement, political terms and Word2Vec clusters obtain similar predictive accuracy.", "This result is expected based on the results from Figure 3 , which showed how political term usage varies across groups, and how it is especially dependent on political engagement.", "While political terms are very effective at distinguishing between two opposing political groups, they can not discriminate as well between levels of engagement within the same ideological orientation.", "Combining all 
classifiers' predictions in a linear ensemble obtains best results when compared to each individual category.", "Encoding Class Structure In our previous experiments, we uncovered that certain relationships exist between the seven groups.", "For example, extreme conservatives and liberals both demonstrate strong political engagement.", "Therefore, this class structure can be exploited to improve classification performance.", "To this end, we deploy the sparse graph regularized approach (Argyriou et al., 2007; Zhou et al., 2011) to encode the structure of the seven classes as a graph regularizer in a logistic regression framework.", "In particular, we employed a multi-task learning paradigm, where each task is a one-vs-all classification.", "Multi-task learning (MTL) is a learning paradigm that jointly learns multiple related tasks and can achieve better generalization performance than learning each task individually, especially when presented with insufficient training samples (Liu et al., 2015 (Liu et al., , 2016b .", "The group structure is encoded into a matrix R which codes the groups which are considered similar.", "The objective of the sparse graph regularized multi-task learning problem is: min W,c τ t=1 N i=1 log(1 + exp(−Y t,i (W T i,t X t,i + c t ))) + γ WR 2 F + λ W 1 , where τ is the number of tasks, |N | the number of samples, X the feature matrix, Y the outcome matrix, W i,t and c t is the model for task t and R is the structure matrix.", "We define three R matrices: (1) codes that groups with similar political engagement are similar (i.e.", "1-7, 2-6, 3-5); (2) codes that groups from each ideological side are similar (i.e.", "1-2, 1-3, 2-3, 5-6, 5-7, 6-7); (3) learnt from the data.", "Results are presented in Table 4 .", "Regular logistic regression performs slightly better than the majority class baseline, which demonstrates that the 7class classification is a very hard problem although most miss-classifications are within one ideology point.", "The graph regularization (GR) improves the classification performance over logistic regression (LR) in all cases, with political leaning based matrix (GR-Leaning) obtaining 2% in accuracy higher than the political engagement one (GR-Engagement) and the learnt matrix (GR-Learnt) obtaining best results.", "Conclusions This study analyzed user-level political ideology through Twitter posts.", "In contrast to previous work, we made use of a novel data set where finegrained user political ideology labels are obtained through surveys as opposed to binary self-reports.", "We showed that users in our data set are far less likely to post about politics and real-world finegrained political ideology prediction is harder and more nuanced than previously reported.", "We analyzed language differences between the ideological groups and uncovered a dimension of political engagement separate from political leaning.", "Our work has implications for pollsters or marketers, who are most interested to identify and persuade moderate users.", "With respect to political conclusions, researchers commonly conceptualize ideology as a single, left-right dimension similar to what we observe in the U.S. 
Congress (Ansolabehere et al., 2008; Bafumi and Herron, 2010) .", "Our results suggest a different direction: self-reported political extremity is more an indication of political engagement than of ideological self-placement (Abramowitz, 2010) .", "In fact, only self-reported extremists appear to devote much of their Twitter activity to politics at all.", "While our study focused solely on text posted by the user, follow-up work can use other modalities such as images or social network analysis to improve prediction performance.", "In addition, our work on user-level modeling can be integrated with work on message-level political bias to study how this is revealed across users with various levels of engagement.", "Another direction of future study will look at political ideology prediction in other countries and cultures, where ideology has different or multiple dimensions." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Data Set", "Features", "Analysis", "Very Conservatives vs. Very Liberals", "Moderate Conservatives vs. Moderate Liberals", "Moderates vs. Extremists", "Political Terms", "Prediction", "Cross-Group Prediction", "Political Leaning and Engagement Prediction", "Encoding Class Structure", "Conclusions" ] }
GEM-SciDuet-train-94#paper-1239#slide-8
Political Engagement
H3a: There is a separate dimension of political engagement. Pearson R between predictions and true labels (Linear Regression, 10-fold cross-validation) across feature sets: Unigrams, LIWC, Topics, Emotions, Political, All.
H3a: There is a separate dimension of political engagement. Pearson R between predictions and true labels (Linear Regression, 10-fold cross-validation) across feature sets: Unigrams, LIWC, Topics, Emotions, Political, All.
[]
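The "Political Engagement" slide above reports Pearson R between linear regression predictions and self-reported labels under 10-fold cross-validation. A minimal sketch of that evaluation loop with scikit-learn's Elastic Net is given below, assuming X is a NumPy user-by-feature matrix and y the self-reported ideology or engagement scale; the paper tunes regularization on a separate validation fold, for which the inner cross-validation here is only a stand-in.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import StratifiedKFold

def cross_validated_pearson(X, y, n_splits=10, seed=0):
    """Out-of-fold Elastic Net predictions, scored with Pearson correlation."""
    y = np.asarray(y, dtype=float)
    preds = np.zeros_like(y)
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in folds.split(X, y.astype(int)):
        model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5)   # inner tuning of the regulariser
        model.fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])
    r, p = pearsonr(preds, y)
    return r, p
```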
GEM-SciDuet-train-94#paper-1239#slide-9
1239
Beyond Binary Labels: Political Ideology Prediction of Twitter Users
Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US. This study examines users' political ideology using a sevenpoint scale which enables us to identify politically moderate and neutral usersgroups which are of particular interest to political scientists and pollsters. Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users. Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords. Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175 ], "paper_content_text": [ "Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US.", "This study examines users' political ideology using a sevenpoint scale which enables us to identify politically moderate and neutral usersgroups which are of particular interest to political scientists and pollsters.", "Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users.", "Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords.", "Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.", "Introduction Social media is used by people to share their opinions and views.", "Unsurprisingly, an important part of the population shares opinions and news related to politics or causes they support, thus offering strong cues about their political preferences and ideologies.", "In addition, political membership is also predictable purely from one's interests or demographics -it is much more likely for a religious person to be conservative or for a younger person to lean liberal (Ellis and Stimson, 2012) .", "* Work carried out during a research visit at the University of Pennsylvania User trait prediction from text is based on the assumption that language use reflects a user's demographics, psychological states or preferences.", "Applications include prediction of age (Rao et al., 2010; Flekova et al., 2016b) , gender (Burger et al., 2011; Sap et al., 2014) , personality (Schwartz et al., 2013; , socioeconomic status (Preoţiuc-Pietro et al., 2015a,b; Liu et al., 2016c) , popularity (Lampos et al., 2014) or location (Cheng et al., 2010) .", "Research on predicting political orientation has focused on methodological improvements (Pennacchiotti and Popescu, 2011) and used data sets with publicly stated dichotomous political orientation labels due to their easy accessibility (Sylwester and Purver, 2015) .", "However, these data sets are not representative samples of the entire population (Cohen and Ruths, 2013) and do not accurately reflect the variety of political attitudes and engagement (Kam et al., 2007) .", "For example, we expect users who state their political affiliation in their profile description, tweet with partisan hashtags or appear in public party lists to use social media as a means of popularizing and supporting their political 
beliefs (Bar-berASa, 2015) .", "Many users may choose not to publicly post about their political preference for various social goals or perhaps this preference may not be strong or representative enough to be disclosed online.", "Dichotomous political preference also ignores users who do not have a political ideology.", "All of these types of users are very important for researchers aiming to understand group preferences, traits or moral values (Lewis and Reiley, 2014; Hersh, 2015) .", "The most common political ideology spectrum in the US is the conservative -liberal (Ellis and Stimson, 2012) .", "We collect a novel data set of Twitter users mapped to this seven-point spectrum which allows us to: 1.", "Uncover the differences in language use between ideological groups; 2.", "Develop a user-level political ideology prediction algorithm that classifies all levels of engagement and leverages the structure in the political ideology spectrum.", "First, using a broad range of language features including unigrams, word clusters and emotions, we study the linguistic differences between the two ideologically extreme groups, the two ideologically moderate groups and between both extremes and moderates in order to provide insight into the content they post on Twitter.", "In addition, we examine the extent to which the ideological groups in our data set post about politics and compare it to a data set obtained similarly to previous work.", "In prediction experiments, we show how accurately we can distinguish between opposing ideological groups in various scenarios and that previous binary political orientation prediction has been oversimplified.", "Then, we measure the extent to which we can predict the two dimensions of political leaning and engagement.", "Finally, we build an ideology classifier in a multi-task learning setup that leverages the relationships between groups.", "1 Related Work Automatically inferring user traits from their online footprints is a prolific topic of research, enabled by the increasing availability of user generated data and advances in machine learning.", "Beyond its research oriented goals, user profiling has important industry applications in online marketing, personalization or large-scale audience profiling.", "To this end, researchers have used a wide range of types of online footprints, including video (Subramanian et al., 2013) , audio (Alam and Riccardi, 2014 ), text (Preoţiuc-Pietro et al., 2015a) , profile images (Liu et al., 2016a) , social data (Van Der Heide et al., 2012; Hall et al., 2014) , social networks (Perozzi and Skiena, 2015; Rout et al., 2013) , payment data (Wang et al., 2016) and endorsements .", "Political orientation prediction has been studied in two related, albeit crucially different scenarios, as also identified in (Zafar et al., 2016) .", "First, researchers aimed to identify and quantify orientation of words (Monroe et al., 2008) , hashtags (Weber et al., 2013) or documents (Iyyer et al., 2014) , or to detect bias (Yano et al., 2010) or impartiality (Zafar et al., 2016) at a document level.", "Our study belongs to the second category, where political orientation is inferred at a user-level.", "All previous studies study labeling US conservatives vs. 
liberals using either text (Rao et al., 2010) , social network connections (Zamal et al., 2012) , platform-specific features (Conover et al., 2011) or a combination of these (Pennacchiotti and Popescu, 2011; Volkova et al., 2014) , with very high reported accuracies of up to 94.9% (Conover et al., 2011) .", "However, all previous work on predicting userlevel political preferences are limited to a binary prediction between liberal/democrat and conservative/republican, disregarding any nuances in political ideology.", "In addition, as the focus of the studies is more on the methodological or interpretation aspects of the problem, another downside is that the user labels were obtained in simple, albeit biased ways.", "These include users who explicitly state their political orientation on user lists of party supporters (Zamal et al., 2012; Pennacchiotti and Popescu, 2011) , supporting partisan causes (Rao et al., 2010) , by following political figures (Volkova et al., 2014) or party accounts (Sylwester and Purver, 2015) or that retweet partisan hashtags (Conover et al., 2011) .", "As also identified in (Cohen and Ruths, 2013) and further confirmed later in this study, these data sets are biased: most people do not clearly state their political preference online -fewer than 5% according to Priante et al.", "(2016) -and those that state their preference are very likely to be political activists.", "Cohen and Ruths (2013) demonstrated that predictive accuracy of classifiers is significantly lower when confronted with users that do not explicitly mention their political orientation.", "Despite this, their study is limited because in their hardest classification task, they use crowdsourced political orientation labels, which may not correspond to reality and suffer from biases (Flekova et al., 2016a; .", "Further, they still only look at predicting binary political orientation.", "To date, no other research on this topic has taken into account these findings.", "Data Set The main data set used in this study consists of 3,938 users recruited through the Qualtrics platform (D 1 ).", "Each participant was compensated with 3 USD for 15 minutes of their time.", "All participants first answered the same demographic questions (including political ideology), then were directed to one of four sets of psychological questionnaires unrelated to the political ideology question.", "They were asked to self-report their political ideology on a seven point scale: Very conservative (1), Conservative (2), Moderately conservative (3), Moderate (4), Moderately liberal (5), Liberal (6), Very liberal (7).", "In addition, participants had the option of choosing Apathetic and Other, which have ambiguous fits on the conservative -liberal spectrum and were removed from our analysis (399 users).", "We also asked participants to self-report their gender (2322 female, 1205 male, 12 other) and age.", "Participants were all from the US in order to limit the impact of cultural and political factors.", "The political ideology distribution in our sample is presented in Figure 1 .", "We asked users their Twitter handle and downloaded their most recent 3,200 tweets, leading to a total of 4,833,133 tweets.", "Before adding users to our 3,938 user data set, we performed the following checks to ensure that the Twitter handle was the user's own: 1) after compensation, users were if they were truthful in reporting their handle and if not, we removed their data from analysis; 2) we manually examined all handles marked as verified by 
Twitter or that had over 2000 followers and eliminated them if they were celebrities or corporate/news accounts, as these were unlikely the users who participated in the survey.", "This study received approval from the Institutional Review Board (IRB) of the University of Pennsylvania.", "In addition, to facilitate comparison to previous work, we also use a data set of 13,651 users with overt political orientation (D 2 ).", "We selected popular political figures unambiguously associated with US liberal politics (@SenSanders, @JoeBiden, @CoryBooker, @JohnKerry) or US conservative politics (@marcorubio, @tedcruz, @RandPaul, @RealBenCarson).", "Liberals in our set (N l = 7417) had to follow on Twitter all of the liberal political figures and none of the conservative figures.", "Likewise, conservative users (N c = 6234) had to follow all of the conservative figures and no liberal figures.", "We downloaded up to 3,200 of each user's most recent tweets, leading to a total of 25,493,407 tweets.", "All tweets were downloaded around 10 August 2016.", "Features In our analysis, we use a broad range of linguistic features described below.", "Unigrams We use the bag-of-words representation to reduce each user's posting history to a normalised frequency distribution over the vocabulary consisting of all words used by at least 10% of the users (6,060 words).", "LIWC Traditional psychological studies use a dictionary-based approach to representing text.", "The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) , and automatically counts word frequencies for 64 different categories manually constructed based on psychological theory.", "These include different parts-of-speech, topical categories and emotions.", "Each user is thereby represented as a frequency distribution over these categories.", "Word2Vec Topics An alternative to LIWC is to use automatically generated word clusters i.e., groups of words that are semantically and/or syntactically similar.", "The clusters help reducing the feature space and provides additional interpretability.", "To create these groups of words, we use an automatic method that leverages word co-occurrence patterns in large corpora by making use of the distributional hypothesis: similar words tend to cooccur in similar contexts (Harris, 1954) .", "Based on co-occurrence statistics, each word is represented as a low dimensional vector of numbers with words closer in this space being more similar (Deerwester et al., 1990) .", "We use the method from (Preoţiuc-Pietro et al., 2015a) to compute topics using word2vec similarity (Mikolov et al., 2013a,b) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007) of different sizes (from 30 to 2000).", "We have tried other alternatives to building clusters: using other word similarities to generate clusters -such as NPMI (Lampos et al., 2014) or GloVe as proposed in (Preoţiuc-Pietro et al., 2015a) -or using standard topic modelling approached to create soft clusters of words e.g., Latent Dirichlet Allocation (Blei et al., 2003) .", "For brevity, we present experiments with the best performing feature set containing 500 Word2Vec clusters.", "We aggregate all the words posted in a users' tweets and represent each user as a distribution of the fraction of words belonging to each cluster.", "Sentiment & Emotions We hypothesise that different political ideologies differ in the type and amount of emotions the users express through their posts.", "The most studied model of discrete 
emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004) which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise.", "We automatically quantify these emotions from our Twitter data set using a publicly available crowd-sourcing derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment Turney, 2010, 2013) .", "Using these lexicons, we assign a predicted emotion to each message and then average across all users' posts to obtain user level emotion expression scores.", "Political Terms In order to select unigrams pertaining to politics, we assigned the most frequent 12,000 unigrams in our data set to three categories: • Political words: mentions of political terms (234); • Political NEs: mentions of politician proper names out of the political terms (39); • Media NEs: mentions of political media sources and pundits out of the political terms (20).", "This coding was initially performed by a research assistant studying political science with good knowledge of US politics and were further filtered and checked by one of the authors.", "Analysis First, we explore the relationships between language use and political ideological groups within each feature set and pairs of opposing user groups.", "To illustrate differences between ideological groups we compare the two political extremes (Very Conservative -Very Liberal) and the political moderates (Moderate Conservative -Moderate Liberal).", "We further compare outright moderates with a group combining the two political extremes to study if we can uncover differences in political engagement and extremity, regardless of the conservative-liberal leaning.", "We use univariate partial linear correlations with age and gender as co-variates to factor out the influence of basic demographics.", "For example, in D 1 , users who reported themselves as very conservative are older and more likely males (µ age = 35.1, pct male = 44%) than the data average (µ age = 31.2, pct male = 35%).", "Additionally, prior to combining the two ideologically extreme groups, we sub-sampled the larger class (Very Liberal) to match the smaller class (Very Conservative) in age and gender.", "In the later prediction experiments, we do not perform matching, as this represents useful signal for classification (Ellis and Stimson, 2012) .", "Results with unigrams are presented in Figure 2 and with the other features in Table 1 .", "These are selected using standard statistical significance tests.", "Very Conservatives vs.", "Very Liberals The comparison between the extreme categories reveals the largest number of significant differences.", "The unigrams and Word2Vec clusters specific to conservatives are dominated by religion specific terms ('praying', 'god', W2V-485, W2V-018, W2V-099, L-RELIG), confirming a well-documented relationship (Gelman, 2009) and words describing family relationships ('uncle', 'son', L-FAMILY), another conservative value (Lakoff, 1997) .", "The emphasis on religious terms among conservatives is consistent with the claim that many Americans associate 'conservative' with 'religious' (Ellis and Stimson, 2012) .", "Extreme liberals show a tendency to use more adjectives (W2V-075, W2V-110), adverbs (L-ADVERB), conjunctions (L-CONJ) and comparisons (L-COMPARE) which indicate more nuanced and complex posts.", "Extreme conservatives post tweets higher in all positive emotions than liberals (L-POSEMO, Emot-Joy, Emot-Positive), 
confirming a previously hypothesised relationship (Napier and Jost, 2008) .", "However, extreme liberals are not associated with posting negative emotions either, only using words that reflect more anxiety (L-ANX), which is related to neuroticism in which the liberals are higher (Gerber et al., 2010) .", "Political term analysis reveals the partisan terms Figure 2 : Unigrams with the highest 80 Pearson correlations shown as word clouds in three vertical panels with a binary variable representing the two ideological groups compared.", "The size of the unigram is scaled by its correlation with the ideological group in bold.", "The color indexes relative frequency, from light blue (rarely used) to dark blue (frequently used).", "All correlations are significant at p < .05 and controlled for age and gender.", "', 'racism', 'feminism', 'transgender') .", "This perhaps reflects the desire for conservatives on Twitter to identify like-minded individuals, as extreme conservatives are a minority on the platform.", "Liberals, by contrast, use the platform to discuss and popularize their causes.", "Moderate Conservatives vs.", "Moderate Liberals Comparing the two sides of moderate users reveals a slightly more nuanced view of the two ideologies.", "While moderate conservatives still make heavy use of religious terms and express positive emotions (Emot-Joy, L-DRIVES), they also use affiliative language (L-AFFILIATION) and plural pronouns (L-WE).", "Moderate liberals are identified by very different features compared to their more extreme counterparts.", "Most striking is the use of swear and sex words (L-SEXUAL, L-ANGER, W2V-316), also highlighted by Sylwester and Purver (2015) .", "Two word clusters relating to British culture (W2V-458) and art (W2V-373) reflect that liberals are more inclined towards arts (Dollinger, 2007) .", "Statistically significant political terms are very few compared to the previous comparison, probably due to their lower overall usage, which we further investigate later.", "Moderates vs. 
Extremists Our final comparison looks at outright moderates compared to the two extreme groups combined, as we hypothesise the existence of a difference in overall political engagement.", "Moderates are not characterized by many features besides a topic of casual words (W2V-098), indicating the heterogeneity of this group of users.", "However, regardless of their orientation, the ideological extremists stand out from moderates.", "They use words and word clusters related to political actors (W2V-309), issues (W2V-237) and laws (W2V-296, W2V-288).", "LIWC analysis uncovers differences in article use (L-ARTICLE) or power words (L-POWER) specific of political tweets.", "The overall sentiment of these users is negative (Emot-Fear, Emot-Disgust, Emot-Sadness, L-DEATH) compared to moderates.", "This reveals -combined with the finding from the first comparison -that while extreme conservatives are overall more positive than liberals, both groups share negative expression.", "Political terms are almost all significantly correlated with the extreme ideological groups, Con.", "(1) Con.", "(2) M.Con.", "(3) Mod.", "(4) confirming the existence of a difference in political engagement which we study in detail next.", "Figure 3 presents the use of the three types of political terms across the 7 ideological groups in D 1 and the two political groups from D 2 .", "We notice the following: Political Terms • D 2 has a huge skew towards political words, with an average of more than three times more political terms across all three categories than our extreme classes from D 1 ; • Within the groups in D 1 , we observe an almost perfectly symmetrical U-shape across all three types of political terms, confirming our hypothesis about political engagement; • The difference between 1-2/6-7 is larger than 2-3/5-6.", "The extreme liberals and conservatives are disproportionately political, and have the potential to give Twitter's political discussions an unrepresentative, extremist hue (Fiorina, 1999) .", "It is also possible, however, that characterizing one as an extreme liberal or conservative indicates as much about her level of political engagement as it does about her placement on a left-right scale (Converse, 1964; Broockman, 2016) .", "Prediction In this section we build predictive models of political ideology and compare them to data sets obtained using previous work.", "Cross-Group Prediction First, we experiment with classifying between conservatives and liberals across various levels of political engagement in D 1 and between the two polarized groups in D 2 .", "We use logistic regression classification to compare three setups in Table 2 with results measured with ROC AUC as the classes are slightly inbalanced: • 10-fold cross-validation where training is performed on the same task as the testing (principal diagonal); • A train-test setup where training is performed on one task (presented in rows) and testing is performed on another (presented in columns); • A domain adaptation setup (results in brackets) where on each of the 10 folds, the 9 training folds (presented in rows) are supplemented with all the data from a different task (presented in columns) using the EasyAdapt algorithm (Daumé III, 2007) as a proof on concept on the effects of using additional distantly supervised data.", "Data pooling lead to worse results than EasyAdapt.", "Each of the three tasks from D 1 have a similar number of training samples, hence we do not expect that data set size has any effects in comparing the results across 
tasks.", "The results with both sets of features show that: • Prediction performance is much higher for D 2 than for D 1 , with the more extreme groups in D 1 being easier to predict than the moderate groups.", "This confirms that the very high accuracies reported by previous research are an artifact of user label collection and that on regular users, the expected accuracy is much lower (Cohen and Ruths, 2013) .", "We further show that, as the level of political engagement decreases, the classification problem becomes even harder; • The model trained on D 2 and Word2Vec word clusters performs significantly worse on D 1 tasks even if the training data is over 10 times larger.", "When using political words, the D 2 trained classifier performs relatively well on all tasks from D 1 ; • Overall, using political words as features performs better than Word2Vec clusters in the binary classification tasks; • Domain adaptation helps in the majority of cases, leading to improvements of up to .03 in AUC (predicting 2v6 supplemented with 3v5 data).", "Political Leaning and Engagement Prediction Political leaning (Conservative -Liberal, excluding the Moderate group) can be considered an ordinal variable and the prediction problem framed as one of regression.", "In addition to the political leaning prediction, based on analysis and previous prediction results, we hypothesize the existence of a separate dimension of political engagement regardless of the partisan side.", "Thus, we merge users from classes 3-5, 2-6, 1-7 and create a variable with four values, where the lowest value is represented by moderate users (4) and the highest value is represented by either very conservative (1) or very liberal (7) users.", "We use a linear regression algorithm with an Elastic Net regularizer (Zou and Hastie, 2005) as implemented in ScikitLearn (Pedregosa et al., 2011) .", "To evaluate our results, we split our data into 10 stratified folds and performed crossvalidation on one held-out fold at a time.", "For all our methods we tune the parameters of our models on a separate validation fold.", "The overall performance is assessed using Pearson correlation between the set of predicted values and the userreported score.", "Results are presented in Table 3 .", "735 The same patterns hold when evaluating the results with Root Mean Squared Error (RMSE).", "Table 3 : Pearson correlations between the predictions and self-reported ideologies using linear regression with each feature category and a linear combination of their predictions in a 10-fold cross-validation setup.", "Political leaning is represented on the 1-7 scale removing the moderates (4).", "Political engagement is a scale ranging from 4 through 3-5 and 2-6 to 1-7.", "The results show that both dimensions can be predicted well above chance, with political leaning being easier to predict than engagement.", "Word2Vec clusters obtain the highest predictive accuracy for political leaning, even though they did not perform as well in the previous classification tasks.", "For political engagement, political terms and Word2Vec clusters obtain similar predictive accuracy.", "This result is expected based on the results from Figure 3 , which showed how political term usage varies across groups, and how it is especially dependent on political engagement.", "While political terms are very effective at distinguishing between two opposing political groups, they can not discriminate as well between levels of engagement within the same ideological orientation.", "Combining all 
classifiers' predictions in a linear ensemble obtains best results when compared to each individual category.", "Encoding Class Structure In our previous experiments, we uncovered that certain relationships exist between the seven groups.", "For example, extreme conservatives and liberals both demonstrate strong political engagement.", "Therefore, this class structure can be exploited to improve classification performance.", "To this end, we deploy the sparse graph regularized approach (Argyriou et al., 2007; Zhou et al., 2011) to encode the structure of the seven classes as a graph regularizer in a logistic regression framework.", "In particular, we employed a multi-task learning paradigm, where each task is a one-vs-all classification.", "Multi-task learning (MTL) is a learning paradigm that jointly learns multiple related tasks and can achieve better generalization performance than learning each task individually, especially when presented with insufficient training samples (Liu et al., 2015 (Liu et al., , 2016b .", "The group structure is encoded into a matrix R which codes the groups which are considered similar.", "The objective of the sparse graph regularized multi-task learning problem is: min W,c τ t=1 N i=1 log(1 + exp(−Y t,i (W T i,t X t,i + c t ))) + γ WR 2 F + λ W 1 , where τ is the number of tasks, |N | the number of samples, X the feature matrix, Y the outcome matrix, W i,t and c t is the model for task t and R is the structure matrix.", "We define three R matrices: (1) codes that groups with similar political engagement are similar (i.e.", "1-7, 2-6, 3-5); (2) codes that groups from each ideological side are similar (i.e.", "1-2, 1-3, 2-3, 5-6, 5-7, 6-7); (3) learnt from the data.", "Results are presented in Table 4 .", "Regular logistic regression performs slightly better than the majority class baseline, which demonstrates that the 7class classification is a very hard problem although most miss-classifications are within one ideology point.", "The graph regularization (GR) improves the classification performance over logistic regression (LR) in all cases, with political leaning based matrix (GR-Leaning) obtaining 2% in accuracy higher than the political engagement one (GR-Engagement) and the learnt matrix (GR-Learnt) obtaining best results.", "Conclusions This study analyzed user-level political ideology through Twitter posts.", "In contrast to previous work, we made use of a novel data set where finegrained user political ideology labels are obtained through surveys as opposed to binary self-reports.", "We showed that users in our data set are far less likely to post about politics and real-world finegrained political ideology prediction is harder and more nuanced than previously reported.", "We analyzed language differences between the ideological groups and uncovered a dimension of political engagement separate from political leaning.", "Our work has implications for pollsters or marketers, who are most interested to identify and persuade moderate users.", "With respect to political conclusions, researchers commonly conceptualize ideology as a single, left-right dimension similar to what we observe in the U.S. 
Congress (Ansolabehere et al., 2008; Bafumi and Herron, 2010) .", "Our results suggest a different direction: self-reported political extremity is more an indication of political engagement than of ideological self-placement (Abramowitz, 2010) .", "In fact, only self-reported extremists appear to devote much of their Twitter activity to politics at all.", "While our study focused solely on text posted by the user, follow-up work can use other modalities such as images or social network analysis to improve prediction performance.", "In addition, our work on user-level modeling can be integrated with work on message-level political bias to study how this is revealed across users with various levels of engagement.", "Another direction of future study will look at political ideology prediction in other countries and cultures, where ideology has different or multiple dimensions." ] }
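The record above describes a unigram feature pipeline (all words used by at least 10% of users, reduced to a normalised frequency distribution per user) and logistic regression classifiers evaluated with ROC AUC under cross-validation. The scikit-learn sketch below illustrates that setup; it is not the authors' code, the toy user_texts and labels are invented, and the fold count is shrunk so the snippet runs on the toy data.

```python
# Sketch (not the authors' code) of the unigram pipeline described above:
# words used by at least 10% of users -> per-user relative frequencies ->
# logistic regression evaluated with ROC AUC under cross-validation.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import normalize

# Hypothetical stand-in data: one concatenated tweet history per user.
user_texts = [
    "praying for my son and family tonight god bless",
    "blessed sunday with family at church praying",
    "god and family first praying for you all",
    "new art exhibit downtown was amazing honestly",
    "reading about feminism and racism debates again",
    "the art show and the new novel were brilliant",
]
labels = np.array([0, 0, 0, 1, 1, 1])   # toy labels: 0 = conservative, 1 = liberal

vectorizer = CountVectorizer(min_df=0.1)     # keep words used by >= 10% of users
counts = vectorizer.fit_transform(user_texts)
freqs = normalize(counts, norm="l1")         # normalised frequency distribution per user

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, freqs, labels, cv=3, scoring="roc_auc")
print("mean ROC AUC:", scores.mean())
```

With real data, cv would be set to 10 as in the paper and the feature matrix kept sparse for efficiency.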
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Data Set", "Features", "Analysis", "Very Conservatives vs. Very Liberals", "Moderate Conservatives vs. Moderate Liberals", "Moderates vs. Extremists", "Political Terms", "Prediction", "Cross-Group Prediction", "Political Leaning and Engagement Prediction", "Encoding Class Structure", "Conclusions" ] }
GEM-SciDuet-train-94#paper-1239#slide-9
Moderate Users
H4 Differences between moderate and extreme users Words associated with moderate Words associated with extreme correlation strength relative frequency Correlations are age and gender controlled
H4 Differences between moderate and extreme users Words associated with moderate Words associated with extreme correlation strength relative frequency Correlations are age and gender controlled
[]
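The emotion features in the record above are computed by assigning emotions to each message from a word lexicon and averaging over a user's posts. The sketch below shows one plausible reading of that aggregation (fraction of a user's posts containing each emotion); the two-entry lexicon and example tweets are invented stand-ins for the crowd-sourced lexicon cited in the paper.

```python
# Illustrative sketch of lexicon-based emotion scoring: tag each tweet with
# the emotions whose lexicon words it contains, then average per user.
# The two-entry lexicon and tweets below are invented placeholders.
from collections import Counter

emotion_lexicon = {"furious": {"anger"}, "wonderful": {"joy"}}

def user_emotion_scores(tweets, lexicon, emotions=("anger", "joy")):
    totals = Counter()
    for tweet in tweets:
        found = set()
        for word in tweet.lower().split():
            found |= lexicon.get(word, set())
        for emotion in found:
            totals[emotion] += 1
    n = max(len(tweets), 1)
    return {e: totals[e] / n for e in emotions}   # fraction of posts per emotion

print(user_emotion_scores(["what a wonderful day", "furious about the news"],
                          emotion_lexicon))
```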
GEM-SciDuet-train-94#paper-1239#slide-10
1239
Beyond Binary Labels: Political Ideology Prediction of Twitter Users
Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US. This study examines users' political ideology using a seven-point scale which enables us to identify politically moderate and neutral users - groups which are of particular interest to political scientists and pollsters. Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users. Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords. Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175 ], "paper_content_text": [ "Automatic political preference prediction from social media posts has to date proven successful only in distinguishing between publicly declared liberals and conservatives in the US.", "This study examines users' political ideology using a sevenpoint scale which enables us to identify politically moderate and neutral usersgroups which are of particular interest to political scientists and pollsters.", "Using a novel data set with political ideology labels self-reported through surveys, our goal is two-fold: a) to characterize the political groups of users through language use on Twitter; b) to build a fine-grained model that predicts political ideology of unseen users.", "Our results identify differences in both political leaning and engagement and the extent to which each group tweets using political keywords.", "Finally, we demonstrate how to improve ideology prediction accuracy by exploiting the relationships between the user groups.", "Introduction Social media is used by people to share their opinions and views.", "Unsurprisingly, an important part of the population shares opinions and news related to politics or causes they support, thus offering strong cues about their political preferences and ideologies.", "In addition, political membership is also predictable purely from one's interests or demographics -it is much more likely for a religious person to be conservative or for a younger person to lean liberal (Ellis and Stimson, 2012) .", "* Work carried out during a research visit at the University of Pennsylvania User trait prediction from text is based on the assumption that language use reflects a user's demographics, psychological states or preferences.", "Applications include prediction of age (Rao et al., 2010; Flekova et al., 2016b) , gender (Burger et al., 2011; Sap et al., 2014) , personality (Schwartz et al., 2013; , socioeconomic status (Preoţiuc-Pietro et al., 2015a,b; Liu et al., 2016c) , popularity (Lampos et al., 2014) or location (Cheng et al., 2010) .", "Research on predicting political orientation has focused on methodological improvements (Pennacchiotti and Popescu, 2011) and used data sets with publicly stated dichotomous political orientation labels due to their easy accessibility (Sylwester and Purver, 2015) .", "However, these data sets are not representative samples of the entire population (Cohen and Ruths, 2013) and do not accurately reflect the variety of political attitudes and engagement (Kam et al., 2007) .", "For example, we expect users who state their political affiliation in their profile description, tweet with partisan hashtags or appear in public party lists to use social media as a means of popularizing and supporting their political 
beliefs (Bar-berASa, 2015) .", "Many users may choose not to publicly post about their political preference for various social goals or perhaps this preference may not be strong or representative enough to be disclosed online.", "Dichotomous political preference also ignores users who do not have a political ideology.", "All of these types of users are very important for researchers aiming to understand group preferences, traits or moral values (Lewis and Reiley, 2014; Hersh, 2015) .", "The most common political ideology spectrum in the US is the conservative -liberal (Ellis and Stimson, 2012) .", "We collect a novel data set of Twitter users mapped to this seven-point spectrum which allows us to: 1.", "Uncover the differences in language use between ideological groups; 2.", "Develop a user-level political ideology prediction algorithm that classifies all levels of engagement and leverages the structure in the political ideology spectrum.", "First, using a broad range of language features including unigrams, word clusters and emotions, we study the linguistic differences between the two ideologically extreme groups, the two ideologically moderate groups and between both extremes and moderates in order to provide insight into the content they post on Twitter.", "In addition, we examine the extent to which the ideological groups in our data set post about politics and compare it to a data set obtained similarly to previous work.", "In prediction experiments, we show how accurately we can distinguish between opposing ideological groups in various scenarios and that previous binary political orientation prediction has been oversimplified.", "Then, we measure the extent to which we can predict the two dimensions of political leaning and engagement.", "Finally, we build an ideology classifier in a multi-task learning setup that leverages the relationships between groups.", "1 Related Work Automatically inferring user traits from their online footprints is a prolific topic of research, enabled by the increasing availability of user generated data and advances in machine learning.", "Beyond its research oriented goals, user profiling has important industry applications in online marketing, personalization or large-scale audience profiling.", "To this end, researchers have used a wide range of types of online footprints, including video (Subramanian et al., 2013) , audio (Alam and Riccardi, 2014 ), text (Preoţiuc-Pietro et al., 2015a) , profile images (Liu et al., 2016a) , social data (Van Der Heide et al., 2012; Hall et al., 2014) , social networks (Perozzi and Skiena, 2015; Rout et al., 2013) , payment data (Wang et al., 2016) and endorsements .", "Political orientation prediction has been studied in two related, albeit crucially different scenarios, as also identified in (Zafar et al., 2016) .", "First, researchers aimed to identify and quantify orientation of words (Monroe et al., 2008) , hashtags (Weber et al., 2013) or documents (Iyyer et al., 2014) , or to detect bias (Yano et al., 2010) or impartiality (Zafar et al., 2016) at a document level.", "Our study belongs to the second category, where political orientation is inferred at a user-level.", "All previous studies study labeling US conservatives vs. 
liberals using either text (Rao et al., 2010) , social network connections (Zamal et al., 2012) , platform-specific features (Conover et al., 2011) or a combination of these (Pennacchiotti and Popescu, 2011; Volkova et al., 2014) , with very high reported accuracies of up to 94.9% (Conover et al., 2011) .", "However, all previous work on predicting userlevel political preferences are limited to a binary prediction between liberal/democrat and conservative/republican, disregarding any nuances in political ideology.", "In addition, as the focus of the studies is more on the methodological or interpretation aspects of the problem, another downside is that the user labels were obtained in simple, albeit biased ways.", "These include users who explicitly state their political orientation on user lists of party supporters (Zamal et al., 2012; Pennacchiotti and Popescu, 2011) , supporting partisan causes (Rao et al., 2010) , by following political figures (Volkova et al., 2014) or party accounts (Sylwester and Purver, 2015) or that retweet partisan hashtags (Conover et al., 2011) .", "As also identified in (Cohen and Ruths, 2013) and further confirmed later in this study, these data sets are biased: most people do not clearly state their political preference online -fewer than 5% according to Priante et al.", "(2016) -and those that state their preference are very likely to be political activists.", "Cohen and Ruths (2013) demonstrated that predictive accuracy of classifiers is significantly lower when confronted with users that do not explicitly mention their political orientation.", "Despite this, their study is limited because in their hardest classification task, they use crowdsourced political orientation labels, which may not correspond to reality and suffer from biases (Flekova et al., 2016a; .", "Further, they still only look at predicting binary political orientation.", "To date, no other research on this topic has taken into account these findings.", "Data Set The main data set used in this study consists of 3,938 users recruited through the Qualtrics platform (D 1 ).", "Each participant was compensated with 3 USD for 15 minutes of their time.", "All participants first answered the same demographic questions (including political ideology), then were directed to one of four sets of psychological questionnaires unrelated to the political ideology question.", "They were asked to self-report their political ideology on a seven point scale: Very conservative (1), Conservative (2), Moderately conservative (3), Moderate (4), Moderately liberal (5), Liberal (6), Very liberal (7).", "In addition, participants had the option of choosing Apathetic and Other, which have ambiguous fits on the conservative -liberal spectrum and were removed from our analysis (399 users).", "We also asked participants to self-report their gender (2322 female, 1205 male, 12 other) and age.", "Participants were all from the US in order to limit the impact of cultural and political factors.", "The political ideology distribution in our sample is presented in Figure 1 .", "We asked users their Twitter handle and downloaded their most recent 3,200 tweets, leading to a total of 4,833,133 tweets.", "Before adding users to our 3,938 user data set, we performed the following checks to ensure that the Twitter handle was the user's own: 1) after compensation, users were if they were truthful in reporting their handle and if not, we removed their data from analysis; 2) we manually examined all handles marked as verified by 
Twitter or that had over 2000 followers and eliminated them if they were celebrities or corporate/news accounts, as these were unlikely the users who participated in the survey.", "This study received approval from the Institutional Review Board (IRB) of the University of Pennsylvania.", "In addition, to facilitate comparison to previous work, we also use a data set of 13,651 users with overt political orientation (D 2 ).", "We selected popular political figures unambiguously associated with US liberal politics (@SenSanders, @JoeBiden, @CoryBooker, @JohnKerry) or US conservative politics (@marcorubio, @tedcruz, @RandPaul, @RealBenCarson).", "Liberals in our set (N l = 7417) had to follow on Twitter all of the liberal political figures and none of the conservative figures.", "Likewise, conservative users (N c = 6234) had to follow all of the conservative figures and no liberal figures.", "We downloaded up to 3,200 of each user's most recent tweets, leading to a total of 25,493,407 tweets.", "All tweets were downloaded around 10 August 2016.", "Features In our analysis, we use a broad range of linguistic features described below.", "Unigrams We use the bag-of-words representation to reduce each user's posting history to a normalised frequency distribution over the vocabulary consisting of all words used by at least 10% of the users (6,060 words).", "LIWC Traditional psychological studies use a dictionary-based approach to representing text.", "The most popular method is based on Linguistic Inquiry and Word Count (LIWC) (Pennebaker et al., 2001) , and automatically counts word frequencies for 64 different categories manually constructed based on psychological theory.", "These include different parts-of-speech, topical categories and emotions.", "Each user is thereby represented as a frequency distribution over these categories.", "Word2Vec Topics An alternative to LIWC is to use automatically generated word clusters i.e., groups of words that are semantically and/or syntactically similar.", "The clusters help reducing the feature space and provides additional interpretability.", "To create these groups of words, we use an automatic method that leverages word co-occurrence patterns in large corpora by making use of the distributional hypothesis: similar words tend to cooccur in similar contexts (Harris, 1954) .", "Based on co-occurrence statistics, each word is represented as a low dimensional vector of numbers with words closer in this space being more similar (Deerwester et al., 1990) .", "We use the method from (Preoţiuc-Pietro et al., 2015a) to compute topics using word2vec similarity (Mikolov et al., 2013a,b) and spectral clustering (Shi and Malik, 2000; von Luxburg, 2007) of different sizes (from 30 to 2000).", "We have tried other alternatives to building clusters: using other word similarities to generate clusters -such as NPMI (Lampos et al., 2014) or GloVe as proposed in (Preoţiuc-Pietro et al., 2015a) -or using standard topic modelling approached to create soft clusters of words e.g., Latent Dirichlet Allocation (Blei et al., 2003) .", "For brevity, we present experiments with the best performing feature set containing 500 Word2Vec clusters.", "We aggregate all the words posted in a users' tweets and represent each user as a distribution of the fraction of words belonging to each cluster.", "Sentiment & Emotions We hypothesise that different political ideologies differ in the type and amount of emotions the users express through their posts.", "The most studied model of discrete 
emotions is the Ekman model (Ekman, 1992; Strapparava and Mihalcea, 2008; Strapparava et al., 2004) which posits the existence of six basic emotions: anger, disgust, fear, joy, sadness and surprise.", "We automatically quantify these emotions from our Twitter data set using a publicly available crowd-sourcing derived lexicon of words associated with any of the six emotions, as well as general positive and negative sentiment Turney, 2010, 2013) .", "Using these lexicons, we assign a predicted emotion to each message and then average across all users' posts to obtain user level emotion expression scores.", "Political Terms In order to select unigrams pertaining to politics, we assigned the most frequent 12,000 unigrams in our data set to three categories: • Political words: mentions of political terms (234); • Political NEs: mentions of politician proper names out of the political terms (39); • Media NEs: mentions of political media sources and pundits out of the political terms (20).", "This coding was initially performed by a research assistant studying political science with good knowledge of US politics and were further filtered and checked by one of the authors.", "Analysis First, we explore the relationships between language use and political ideological groups within each feature set and pairs of opposing user groups.", "To illustrate differences between ideological groups we compare the two political extremes (Very Conservative -Very Liberal) and the political moderates (Moderate Conservative -Moderate Liberal).", "We further compare outright moderates with a group combining the two political extremes to study if we can uncover differences in political engagement and extremity, regardless of the conservative-liberal leaning.", "We use univariate partial linear correlations with age and gender as co-variates to factor out the influence of basic demographics.", "For example, in D 1 , users who reported themselves as very conservative are older and more likely males (µ age = 35.1, pct male = 44%) than the data average (µ age = 31.2, pct male = 35%).", "Additionally, prior to combining the two ideologically extreme groups, we sub-sampled the larger class (Very Liberal) to match the smaller class (Very Conservative) in age and gender.", "In the later prediction experiments, we do not perform matching, as this represents useful signal for classification (Ellis and Stimson, 2012) .", "Results with unigrams are presented in Figure 2 and with the other features in Table 1 .", "These are selected using standard statistical significance tests.", "Very Conservatives vs.", "Very Liberals The comparison between the extreme categories reveals the largest number of significant differences.", "The unigrams and Word2Vec clusters specific to conservatives are dominated by religion specific terms ('praying', 'god', W2V-485, W2V-018, W2V-099, L-RELIG), confirming a well-documented relationship (Gelman, 2009) and words describing family relationships ('uncle', 'son', L-FAMILY), another conservative value (Lakoff, 1997) .", "The emphasis on religious terms among conservatives is consistent with the claim that many Americans associate 'conservative' with 'religious' (Ellis and Stimson, 2012) .", "Extreme liberals show a tendency to use more adjectives (W2V-075, W2V-110), adverbs (L-ADVERB), conjunctions (L-CONJ) and comparisons (L-COMPARE) which indicate more nuanced and complex posts.", "Extreme conservatives post tweets higher in all positive emotions than liberals (L-POSEMO, Emot-Joy, Emot-Positive), 
confirming a previously hypothesised relationship (Napier and Jost, 2008) .", "However, extreme liberals are not associated with posting negative emotions either, only using words that reflect more anxiety (L-ANX), which is related to neuroticism in which the liberals are higher (Gerber et al., 2010) .", "Political term analysis reveals the partisan terms Figure 2 : Unigrams with the highest 80 Pearson correlations shown as word clouds in three vertical panels with a binary variable representing the two ideological groups compared.", "The size of the unigram is scaled by its correlation with the ideological group in bold.", "The color indexes relative frequency, from light blue (rarely used) to dark blue (frequently used).", "All correlations are significant at p < .05 and controlled for age and gender.", "', 'racism', 'feminism', 'transgender') .", "This perhaps reflects the desire for conservatives on Twitter to identify like-minded individuals, as extreme conservatives are a minority on the platform.", "Liberals, by contrast, use the platform to discuss and popularize their causes.", "Moderate Conservatives vs.", "Moderate Liberals Comparing the two sides of moderate users reveals a slightly more nuanced view of the two ideologies.", "While moderate conservatives still make heavy use of religious terms and express positive emotions (Emot-Joy, L-DRIVES), they also use affiliative language (L-AFFILIATION) and plural pronouns (L-WE).", "Moderate liberals are identified by very different features compared to their more extreme counterparts.", "Most striking is the use of swear and sex words (L-SEXUAL, L-ANGER, W2V-316), also highlighted by Sylwester and Purver (2015) .", "Two word clusters relating to British culture (W2V-458) and art (W2V-373) reflect that liberals are more inclined towards arts (Dollinger, 2007) .", "Statistically significant political terms are very few compared to the previous comparison, probably due to their lower overall usage, which we further investigate later.", "Moderates vs. 
Extremists Our final comparison looks at outright moderates compared to the two extreme groups combined, as we hypothesise the existence of a difference in overall political engagement.", "Moderates are not characterized by many features besides a topic of casual words (W2V-098), indicating the heterogeneity of this group of users.", "However, regardless of their orientation, the ideological extremists stand out from moderates.", "They use words and word clusters related to political actors (W2V-309), issues (W2V-237) and laws (W2V-296, W2V-288).", "LIWC analysis uncovers differences in article use (L-ARTICLE) or power words (L-POWER) specific of political tweets.", "The overall sentiment of these users is negative (Emot-Fear, Emot-Disgust, Emot-Sadness, L-DEATH) compared to moderates.", "This reveals -combined with the finding from the first comparison -that while extreme conservatives are overall more positive than liberals, both groups share negative expression.", "Political terms are almost all significantly correlated with the extreme ideological groups, Con.", "(1) Con.", "(2) M.Con.", "(3) Mod.", "(4) confirming the existence of a difference in political engagement which we study in detail next.", "Figure 3 presents the use of the three types of political terms across the 7 ideological groups in D 1 and the two political groups from D 2 .", "We notice the following: Political Terms • D 2 has a huge skew towards political words, with an average of more than three times more political terms across all three categories than our extreme classes from D 1 ; • Within the groups in D 1 , we observe an almost perfectly symmetrical U-shape across all three types of political terms, confirming our hypothesis about political engagement; • The difference between 1-2/6-7 is larger than 2-3/5-6.", "The extreme liberals and conservatives are disproportionately political, and have the potential to give Twitter's political discussions an unrepresentative, extremist hue (Fiorina, 1999) .", "It is also possible, however, that characterizing one as an extreme liberal or conservative indicates as much about her level of political engagement as it does about her placement on a left-right scale (Converse, 1964; Broockman, 2016) .", "Prediction In this section we build predictive models of political ideology and compare them to data sets obtained using previous work.", "Cross-Group Prediction First, we experiment with classifying between conservatives and liberals across various levels of political engagement in D 1 and between the two polarized groups in D 2 .", "We use logistic regression classification to compare three setups in Table 2 with results measured with ROC AUC as the classes are slightly inbalanced: • 10-fold cross-validation where training is performed on the same task as the testing (principal diagonal); • A train-test setup where training is performed on one task (presented in rows) and testing is performed on another (presented in columns); • A domain adaptation setup (results in brackets) where on each of the 10 folds, the 9 training folds (presented in rows) are supplemented with all the data from a different task (presented in columns) using the EasyAdapt algorithm (Daumé III, 2007) as a proof on concept on the effects of using additional distantly supervised data.", "Data pooling lead to worse results than EasyAdapt.", "Each of the three tasks from D 1 have a similar number of training samples, hence we do not expect that data set size has any effects in comparing the results across 
tasks.", "The results with both sets of features show that: • Prediction performance is much higher for D 2 than for D 1 , with the more extreme groups in D 1 being easier to predict than the moderate groups.", "This confirms that the very high accuracies reported by previous research are an artifact of user label collection and that on regular users, the expected accuracy is much lower (Cohen and Ruths, 2013) .", "We further show that, as the level of political engagement decreases, the classification problem becomes even harder; • The model trained on D 2 and Word2Vec word clusters performs significantly worse on D 1 tasks even if the training data is over 10 times larger.", "When using political words, the D 2 trained classifier performs relatively well on all tasks from D 1 ; • Overall, using political words as features performs better than Word2Vec clusters in the binary classification tasks; • Domain adaptation helps in the majority of cases, leading to improvements of up to .03 in AUC (predicting 2v6 supplemented with 3v5 data).", "Political Leaning and Engagement Prediction Political leaning (Conservative -Liberal, excluding the Moderate group) can be considered an ordinal variable and the prediction problem framed as one of regression.", "In addition to the political leaning prediction, based on analysis and previous prediction results, we hypothesize the existence of a separate dimension of political engagement regardless of the partisan side.", "Thus, we merge users from classes 3-5, 2-6, 1-7 and create a variable with four values, where the lowest value is represented by moderate users (4) and the highest value is represented by either very conservative (1) or very liberal (7) users.", "We use a linear regression algorithm with an Elastic Net regularizer (Zou and Hastie, 2005) as implemented in ScikitLearn (Pedregosa et al., 2011) .", "To evaluate our results, we split our data into 10 stratified folds and performed crossvalidation on one held-out fold at a time.", "For all our methods we tune the parameters of our models on a separate validation fold.", "The overall performance is assessed using Pearson correlation between the set of predicted values and the userreported score.", "Results are presented in Table 3 .", "735 The same patterns hold when evaluating the results with Root Mean Squared Error (RMSE).", "Table 3 : Pearson correlations between the predictions and self-reported ideologies using linear regression with each feature category and a linear combination of their predictions in a 10-fold cross-validation setup.", "Political leaning is represented on the 1-7 scale removing the moderates (4).", "Political engagement is a scale ranging from 4 through 3-5 and 2-6 to 1-7.", "The results show that both dimensions can be predicted well above chance, with political leaning being easier to predict than engagement.", "Word2Vec clusters obtain the highest predictive accuracy for political leaning, even though they did not perform as well in the previous classification tasks.", "For political engagement, political terms and Word2Vec clusters obtain similar predictive accuracy.", "This result is expected based on the results from Figure 3 , which showed how political term usage varies across groups, and how it is especially dependent on political engagement.", "While political terms are very effective at distinguishing between two opposing political groups, they can not discriminate as well between levels of engagement within the same ideological orientation.", "Combining all 
classifiers' predictions in a linear ensemble obtains best results when compared to each individual category.", "Encoding Class Structure In our previous experiments, we uncovered that certain relationships exist between the seven groups.", "For example, extreme conservatives and liberals both demonstrate strong political engagement.", "Therefore, this class structure can be exploited to improve classification performance.", "To this end, we deploy the sparse graph regularized approach (Argyriou et al., 2007; Zhou et al., 2011) to encode the structure of the seven classes as a graph regularizer in a logistic regression framework.", "In particular, we employed a multi-task learning paradigm, where each task is a one-vs-all classification.", "Multi-task learning (MTL) is a learning paradigm that jointly learns multiple related tasks and can achieve better generalization performance than learning each task individually, especially when presented with insufficient training samples (Liu et al., 2015 (Liu et al., , 2016b .", "The group structure is encoded into a matrix R which codes the groups which are considered similar.", "The objective of the sparse graph regularized multi-task learning problem is: min W,c τ t=1 N i=1 log(1 + exp(−Y t,i (W T i,t X t,i + c t ))) + γ WR 2 F + λ W 1 , where τ is the number of tasks, |N | the number of samples, X the feature matrix, Y the outcome matrix, W i,t and c t is the model for task t and R is the structure matrix.", "We define three R matrices: (1) codes that groups with similar political engagement are similar (i.e.", "1-7, 2-6, 3-5); (2) codes that groups from each ideological side are similar (i.e.", "1-2, 1-3, 2-3, 5-6, 5-7, 6-7); (3) learnt from the data.", "Results are presented in Table 4 .", "Regular logistic regression performs slightly better than the majority class baseline, which demonstrates that the 7class classification is a very hard problem although most miss-classifications are within one ideology point.", "The graph regularization (GR) improves the classification performance over logistic regression (LR) in all cases, with political leaning based matrix (GR-Leaning) obtaining 2% in accuracy higher than the political engagement one (GR-Engagement) and the learnt matrix (GR-Learnt) obtaining best results.", "Conclusions This study analyzed user-level political ideology through Twitter posts.", "In contrast to previous work, we made use of a novel data set where finegrained user political ideology labels are obtained through surveys as opposed to binary self-reports.", "We showed that users in our data set are far less likely to post about politics and real-world finegrained political ideology prediction is harder and more nuanced than previously reported.", "We analyzed language differences between the ideological groups and uncovered a dimension of political engagement separate from political leaning.", "Our work has implications for pollsters or marketers, who are most interested to identify and persuade moderate users.", "With respect to political conclusions, researchers commonly conceptualize ideology as a single, left-right dimension similar to what we observe in the U.S. 
Congress (Ansolabehere et al., 2008; Bafumi and Herron, 2010) .", "Our results suggest a different direction: self-reported political extremity is more an indication of political engagement than of ideological self-placement (Abramowitz, 2010) .", "In fact, only self-reported extremists appear to devote much of their Twitter activity to politics at all.", "While our study focused solely on text posted by the user, follow-up work can use other modalities such as images or social network analysis to improve prediction performance.", "In addition, our work on user-level modeling can be integrated with work on message-level political bias to study how this is revealed across users with various levels of engagement.", "Another direction of future study will look at political ideology prediction in other countries and cultures, where ideology has different or multiple dimensions." ] }
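The domain adaptation setup mentioned in the paper content above relies on the EasyAdapt algorithm (Daumé III, 2007), which is commonly realised as feature augmentation: each example receives a shared copy of its features plus a copy specific to its domain, so a linear model can learn shared and domain-specific weights jointly. The sketch below shows that augmentation for one source and one target domain; it is a generic illustration, not the authors' exact configuration or feature set.

```python
# EasyAdapt-style feature augmentation (Daume III, 2007), generic sketch:
# each example maps to [shared copy, source-only copy, target-only copy].
import numpy as np

def easy_adapt(X, domains):
    """X: (n, d) features; domains: (n,) array of 0 (source) or 1 (target)."""
    n, d = X.shape
    augmented = np.zeros((n, 3 * d))
    augmented[:, :d] = X                      # shared block
    source = domains == 0
    augmented[source, d:2 * d] = X[source]    # source-specific block
    augmented[~source, 2 * d:] = X[~source]   # target-specific block
    return augmented

X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
domains = np.array([0, 1, 0])
print(easy_adapt(X, domains))
```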
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "5.4", "6", "6.1", "6.2", "6.3", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Data Set", "Features", "Analysis", "Very Conservatives vs. Very Liberals", "Moderate Conservatives vs. Moderate Liberals", "Moderates vs. Extremists", "Political Terms", "Prediction", "Cross-Group Prediction", "Political Leaning and Engagement Prediction", "Encoding Class Structure", "Conclusions" ] }
GEM-SciDuet-train-94#paper-1239#slide-10
Take Aways
I User-level trait acquisition methodologies can generate I Goes beyond binary classes I The problem was to date over-simplified I New data set available for research I New model to identify political leaning and engagement
I User-level trait acquisition methodologies can generate I Goes beyond binary classes I The problem was to date over-simplified I New data set available for research I New model to identify political leaning and engagement
[]
GEM-SciDuet-train-95#paper-1249#slide-0
1249
Simplified Abugidas
An abugida is a writing system where the consonant letters represent syllables with a default vowel and other vowels are denoted by diacritics. We investigate the feasibility of recovering the original text written in an abugida after omitting subordinate diacritics and merging consonant letters with similar phonetic values. This is crucial for developing more efficient input methods by reducing the complexity in abugidas. Four abugidas in the southern Brahmic family, i.e., Thai, Burmese, Khmer, and Lao, were studied using a newswire 20,000-sentence dataset. We compared the recovery performance of a support vector machine and an LSTM-based recurrent neural network, finding that the abugida graphemes could be recovered with 94%-97% accuracy at the top-1 level and 98%-99% at the top-4 level, even after omitting most diacritics (10-30 types) and merging the remaining 30-50 characters into 21 graphemes.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91 ], "paper_content_text": [ "Introduction Writing systems are used to record utterances in a wide range of languages and can be organized into the hierarchy shown in Fig.", "1 .", "The symbols in a writing system generally represent either speech sounds (phonograms) or semantic units (logograms) .", "Phonograms can be either segmental or syllabic, with segmental systems being more phonetic because they use separate symbols (i.e., letters) to represent consonants and vowels.", "Segmental systems can be further subdivided depending on their representation of vowels.", "Alphabets (e.g., the Latin, Cyrillic, and Greek scripts) are the most common and treat vowel and consonant let- ters equally.", "In contrast, abjads (e.g., the Arabic and Hebrew scripts) do not write most vowels explicitly.", "The third type, abugidas, also called alphasyllabary, includes features from both segmental and syllabic systems.", "In abugidas, consonant letters represent syllables with a default vowel, and other vowels are denoted by diacritics.", "Abugidas thus denote vowels less explicitly than alphabets but more explicitly than abjads, while being less phonetic than alphabets, but more phonetic than syllabaries.", "Since abugidas combine segmental and syllabic systems, they typically have more symbols than conventional alphabets.", "In this study, we investigate how to simplify and recover abugidas, with the aim of developing a more efficient method of encoding abugidas for input.", "Alphabets generally do not have a large set of symbols, making them easy to map to a traditional keyboard, and logogram and syllabic systems need specially designed input methods because of their large variety of symbols.", "Traditional input methods for abugidas are similar to those for alphabets, mapping two or three different symbols onto each key and requiring users to type each character and diacritic exactly.", "In contrast, we are able to substantially simplify inputting abugidas by encoding them in a lossy (or \"fuzzy\") way.", "TH ะ ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั MY ိ ိ ိ ိ ေိ ိ ိ ိ ိ KM ិ ិ ិ ិ ិ ិ ិ ើិ ើិ ើិ ើិ ែិ ៃិ ើិ ើិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ LO ະ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ຽ ិ ិ ិ ិ ិ ិ OMITTED I II I II I II I II MN K G U C J I Y T D N L P B M W R S H Q A E TH กขฃ คฅฆ ง จฉ ชซฌ ญ ย ฎฏฐดตถ ฑฒทธ ณน ลฦฬ บปผฝ พฟภ ม ว รฤ ศษส หฬ อ ๅ เ แ โ ใ ไ MY ကခ ဂဃ င စဆ ဇဈ ဉ ည ယ ိ ဋဌတထ ဍဎဒဓ ဏန လဠ ပဖ ဗဘ မ ဝ ိ ရ ြိ သဿ ဟ ိ အ ိ ိ ိ ိ KM កខ គឃ ង ចឆ ជឈ ញ យ ដឋតថ ឌឍទធ ណន លឡ បផ ពភ ម វ រ ឝឞស ហ អ ិ ្ LO ກຂ ຄ ງ ຈ ຊ ຍ ຢ ດຕຖ ທ ນ ລ ບປຜຝ ພຟ ມ ວ ຣ ສ ຫຮ ອ ເ ແ ໂ ໃ ໄ APP.", "DENTAL PALATE PRE-V. DE-V. PLOSIVE NAS.", "MERGED R-LIKE S-LIKE H-LIKE LONG-A ZERO-C. 
LABIAL PLOSIVE NAS.", "APP.", "PLOSIVE NAS.", "GUTTURAL PLOSIVE NAS.", "APP.", "Figure 3 : Merging and omission for Thai (TH), Burmese (MY), Khmer (KM), and Lao (LO) scripts.", "The MN row lists the mnemonics assigned to graphemes in our experiment.", "In this study, the mnemonics can be assigned arbitrarily, and we selected Latin letters related to the real pronunciation wherever possible.", "Fig.", "2 gives an overview of this study, showing examples in Khmer.", "We simplify abugidas by omitting vowel diacritics and merging consonant letters with identical or similar phonetic values, as shown in (a).", "This simplification is intuitive, both orthographically and phonetically.", "To resolve the ambiguities introduced by the simplification, we use data-driven methods to recover the original texts, as shown in (b).", "We conducted experiments on four southern Brahmic scripts, i.e., Thai, Burmese, Khmer, and Lao scripts, with a unified framework, using data from the Asian Language Treebank (ALT) (Riza et al., 2016) .", "The experiments show that the abugidas can be recovered satisfactorily by a recurrent neural network (RNN) using long short-term memory (LSTM) units, even when nearly all of the diacritics (10 -30 types) have been omitted and the remaining 30 -50 characters have been merged into 21 graphemes.", "Thai gave the best performance, with 97% top-1 accuracy for graphemes and over 99% top-4 accuracy.", "Lao, which gave the worst performance, still achieved the top-1 and top-4 accuracies of around 94% and 98%, respectively.", "The Burmese and Khmer results, which lay in-between the other two, were also investigated by manual evaluation.", "Related Work Some optimized keyboard layout have been proposed for specific abugidas (Ouk et al., 2008) .", "Most studies on input methods have focused on Chinese and Japanese characters, where thousands of symbols need to be encoded and recovered.", "For Chinese characters, Chen and Lee (2000) made an early attempt to apply statistical methods to sentence-level processing, using a hidden Markov model.", "Others have examined max-entropy models, support vector machines (SVMs), conditional random fields (CRFs), and machine translation techniques (Wang et al., 2006; Jiang et al., 2007; Li et al., 2009; Yang et al., 2012) .", "Similar methods have also been developed for character conversion in Japanese (Tokunaga et al., 2011) .", "This study takes a similar approach to the research on Chinese and Japanese, transforming a less informative encoding into strings in a natural and redundant writing system.", "Furthermore, our study can be considered as a specific lossy compression scheme on abugida textual data.", "Unlike images or audio, the lossy text compression has received little attention as it may cause difficulties with reading (Witten et al., 1994) .", "However, we handle this issue within an input method framework, where the simplified encoding is not read directly.", "Simplified Abugidas We designed simplification schemes for several different scripts within a unified framework based on phonetics and conventional usages, without considering many language specific features.", "Our primary aim was to investigate the feasibility of reducing the complexity of abugidas and to establish methods of recovering the texts.", "We will consider language-specific optimization in a future work, via both data-and user-driven studies.", "The simplification scheme is shown in Fig.", "3 .", "1 Generally, the merges are based on the common distribution of 
consonant phonemes in most natural languages, as well as the etymology of the characters in each abugida.", "Specifically, three or four graphemes are preserved for the different articulation locations (i.e., guttural, palate, dental, and labial), that two for plosives, one for nasal (NAS.", "), and one for approximant (APP.)", "if present.", "Additional consonants such as trills (R-LIKE), fricatives (S-/H-LIKE), and empty (ZERO-C.) are also assigned their own graphemes.", "Although the simplification omits most diacritics, three types are retained, i.e., one basic mark common to nearly all Brahmic abugidas (LONG-A), the preposed vowels in Thai and Lao (PRE-V.), and the vowel-depressors (and/or consonant-stackers) in Burmese and Khmer (DE-V.).", "We assigned graphemes to these because we found they informed the spelling and were intuitive when typing.", "The net result was the omission of 18 types of diacritics in Thai, 9 in Burmese, 27 in Khmer, and 18 in Lao, and the merging of the remaining 53 types of characters in Thai, 43 in Burmese, 37 in Khmer, and 33 in Lao, into a unified set of 21 graphemes.", "The simplification thus substantially reduces the number of graphemes, and represents a straightforward benchmark for further languagespecific refinement to build on.", "Recovery Methods The recovery process can be formalized as a sequential labeling task, that takes the simplified encoding as input, and outputs the writing units, composed of merged and omitted character(s) in the original abugidas, corresponding to each simplified grapheme.", "Although structured learning methods such as CRF (Lafferty et al., 2001) have been widely used, we found that searching for the label sequences in the output space was too costly, because of the number of labels to be recovered.", "2 Instead, we adopted non-structured point-wise prediction methods using a linear SVM (Cortes and Vapnik, 1995) and an LSTM-based RNN (Hochreiter and Schmidhuber, 1997) .", "Fig.", "4 shows the overall structure of the RNN.", "After many experimentations, a general \"shallow and broad\" configuration was adopted.", "Specifically, simplified grapheme bi-grams are first embedded into 128-dimensional vectors 3 and then encoded in one layer of a bi-directional LSTM, resulting in a final representation consisting of a 512-dimensional vector that concatenates two 256-dimensional vectors from the two directions.", "The number of dimensions used here is large because we found that higher-dimensional vectors were more effective than the deeper structures for this task, as memory capacity was more important than classification ability.", "For the same reason, the representations obtained from the LSTM layer are transformed linearly before the softmax function is applied, as we found that non-linear transformations, which are commonly used for final classification, did not help for this task.", "Experiments and Evaluation We used raw textual data from the ALT, 4 comprising around 20, 000 sentences translated from English.", "The data were divided into training, development, and test sets as specified by the project.", "5 For the SVM experiments, we used the offthe-shelf LIBLINEAR library (Fan et al., 2008) wrapped by the KyTea toolkit.", "6 Table 1 gives the recovery accuracies, demonstrating that recovery is not a difficult classification task, given well represented contextual features.", "In general, using up to 5-gram features before/after the simplified grapheme yielded the best results for the baseline, except with Burmese, 
where 7-gram features brought a small additional improvement.", "Because Burmese texts use relatively more spaces than the other three scripts, longer contexts help more.", "Meanwhile, Lao produced the worst results, possibly because the omission and merging process was harsh: Lao is the most phonetic of the four scripts, with the least redundant spellings.", "The LSTM-based RNN was implemented using DyNet (Neubig et al., 2017) , and it was trained using Adam (Kingma and Ba, 2014) with an initial learning rate of 10 −3 .", "If the accuracy decreased on the development set, the learning rate was halved, and learning was terminated when there was no improvement on the development set for three iterations.", "We did not use dropout (Srivastava et al., 2014) but instead a voting ensemble over a set of differently initialized models trained in parallel, which is both more effective and faster.", "As shown in Table 2 , the RNN outperformed SVM on all scripts in terms of top-1 accuracy.", "A more lenient evaluation, i.e., top-n accuracy, showed a satisfactory coverage of around 98% (Khmer and Lao) to 99% (Thai and Burmese) considering only the top four results.", "Fig.", "5 shows the effect of changing the size of the training dataset by repeatedly halving it until it was one-eighth of its original size, demonstrating that the RNN outperformed SVM regardless of training data size.", "The LSTM-based RNN should thus be a substantially better solution than the SVM for this task.", "We also investigated Burmese and Khmer further using manual evaluation.", "The results of RNN @1 ⊕16 in Table 2 were evaluated by native speakers, who examined the output writing units corresponding to each input simplified grapheme and classified the errors using four levels: 0) acceptable, i.e., alternative spelling, 1) clear and easy to identify the correct result, 2) confusing but possible to identify the correct result, and 3) incomprehensible.", "Table 3 shows the error distribution.", "For Burmese, most of the errors are at levels 1 and 2, and Khmer has a wider distribution.", "For both scripts, around 50% of the errors are serious (level 2 or 3), but the distributions suggest that they have different characteristics.", "We are currently conducting a case study on these errors for further language-specific improvements.", "Conclusion and Future Work In this study, a scheme was used to substantially simplify four abugidas, omitting most diacritics and merging the remaining characters.", "An SVM and an LSTM-based RNN were then used to recover the original texts, showing that the simplified abugidas could be recovered well.", "This illustrates the feasibility of encoding abugidas less redundantly, which could help with the development of more efficient input methods.", "As for the future work, we are planning to include language-specific optimizations in the design of the simplification scheme and to improve" ] }
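As a companion to the recovery-model description in the record above, the sketch below re-expresses the stated architecture (128-dimensional bi-gram embeddings, one bi-directional LSTM with 256 units per direction, a linear transform, then softmax). The paper's implementation used DyNet; this sketch uses PyTorch, and the bi-gram vocabulary size, the number of output writing units, and the dummy batch are placeholders.

```python
import torch
import torch.nn as nn

class AbugidaRecoverer(nn.Module):
    """Bi-gram embeddings -> one bi-directional LSTM -> linear transform -> softmax."""
    def __init__(self, n_bigrams, n_units):
        super().__init__()
        self.embed = nn.Embedding(n_bigrams, 128)              # 128-dim bi-gram embeddings
        self.lstm = nn.LSTM(128, 256, num_layers=1,
                            batch_first=True, bidirectional=True)  # 256 per direction, 512 concatenated
        self.out = nn.Linear(512, n_units)                     # linear only, no extra non-linearity

    def forward(self, bigram_ids):                             # bigram_ids: (batch, seq_len) integer ids
        h, _ = self.lstm(self.embed(bigram_ids))               # (batch, seq_len, 512)
        return self.out(h)                                     # per-position logits over writing units

# Placeholder sizes: 21 graphemes give at most 21 * 21 bi-grams; 500 writing units is arbitrary.
model = AbugidaRecoverer(n_bigrams=21 * 21, n_units=500)
logits = model(torch.randint(0, 21 * 21, (2, 30)))             # dummy batch: 2 sequences of 30 positions
probs = logits.softmax(dim=-1)                                 # top-1 / top-n candidates come from these
```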
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Simplified Abugidas", "Recovery Methods", "Experiments and Evaluation", "Conclusion and Future Work" ] }
GEM-SciDuet-train-95#paper-1249#slide-0
Writing Systems Hierarchy
phonogram segmental alphabet can
phonogram segmental alphabet can
[]
GEM-SciDuet-train-95#paper-1249#slide-2
1249
Simplified Abugidas
An abugida is a writing system where the consonant letters represent syllables with a default vowel and other vowels are denoted by diacritics. We investigate the feasibility of recovering the original text written in an abugida after omitting subordinate diacritics and merging consonant letters with similar phonetic values. This is crucial for developing more efficient input methods by reducing the complexity in abugidas. Four abugidas in the southern Brahmic family, i.e., Thai, Burmese, Khmer, and Lao, were studied using a newswire 20,000-sentence dataset. We compared the recovery performance of a support vector machine and an LSTM-based recurrent neural network, finding that the abugida graphemes could be recovered with 94%-97% accuracy at the top-1 level and 98%-99% at the top-4 level, even after omitting most diacritics (10-30 types) and merging the remaining 30-50 characters into 21 graphemes.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91 ], "paper_content_text": [ "Introduction Writing systems are used to record utterances in a wide range of languages and can be organized into the hierarchy shown in Fig.", "1 .", "The symbols in a writing system generally represent either speech sounds (phonograms) or semantic units (logograms) .", "Phonograms can be either segmental or syllabic, with segmental systems being more phonetic because they use separate symbols (i.e., letters) to represent consonants and vowels.", "Segmental systems can be further subdivided depending on their representation of vowels.", "Alphabets (e.g., the Latin, Cyrillic, and Greek scripts) are the most common and treat vowel and consonant let- ters equally.", "In contrast, abjads (e.g., the Arabic and Hebrew scripts) do not write most vowels explicitly.", "The third type, abugidas, also called alphasyllabary, includes features from both segmental and syllabic systems.", "In abugidas, consonant letters represent syllables with a default vowel, and other vowels are denoted by diacritics.", "Abugidas thus denote vowels less explicitly than alphabets but more explicitly than abjads, while being less phonetic than alphabets, but more phonetic than syllabaries.", "Since abugidas combine segmental and syllabic systems, they typically have more symbols than conventional alphabets.", "In this study, we investigate how to simplify and recover abugidas, with the aim of developing a more efficient method of encoding abugidas for input.", "Alphabets generally do not have a large set of symbols, making them easy to map to a traditional keyboard, and logogram and syllabic systems need specially designed input methods because of their large variety of symbols.", "Traditional input methods for abugidas are similar to those for alphabets, mapping two or three different symbols onto each key and requiring users to type each character and diacritic exactly.", "In contrast, we are able to substantially simplify inputting abugidas by encoding them in a lossy (or \"fuzzy\") way.", "TH ะ ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั MY ိ ိ ိ ိ ေိ ိ ိ ိ ိ KM ិ ិ ិ ិ ិ ិ ិ ើិ ើិ ើិ ើិ ែិ ៃិ ើិ ើិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ LO ະ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ຽ ិ ិ ិ ិ ិ ិ OMITTED I II I II I II I II MN K G U C J I Y T D N L P B M W R S H Q A E TH กขฃ คฅฆ ง จฉ ชซฌ ญ ย ฎฏฐดตถ ฑฒทธ ณน ลฦฬ บปผฝ พฟภ ม ว รฤ ศษส หฬ อ ๅ เ แ โ ใ ไ MY ကခ ဂဃ င စဆ ဇဈ ဉ ည ယ ိ ဋဌတထ ဍဎဒဓ ဏန လဠ ပဖ ဗဘ မ ဝ ိ ရ ြိ သဿ ဟ ိ အ ိ ိ ိ ိ KM កខ គឃ ង ចឆ ជឈ ញ យ ដឋតថ ឌឍទធ ណន លឡ បផ ពភ ម វ រ ឝឞស ហ អ ិ ្ LO ກຂ ຄ ງ ຈ ຊ ຍ ຢ ດຕຖ ທ ນ ລ ບປຜຝ ພຟ ມ ວ ຣ ສ ຫຮ ອ ເ ແ ໂ ໃ ໄ APP.", "DENTAL PALATE PRE-V. DE-V. PLOSIVE NAS.", "MERGED R-LIKE S-LIKE H-LIKE LONG-A ZERO-C. 
LABIAL PLOSIVE NAS.", "APP.", "PLOSIVE NAS.", "GUTTURAL PLOSIVE NAS.", "APP.", "Figure 3 : Merging and omission for Thai (TH), Burmese (MY), Khmer (KM), and Lao (LO) scripts.", "The MN row lists the mnemonics assigned to graphemes in our experiment.", "In this study, the mnemonics can be assigned arbitrarily, and we selected Latin letters related to the real pronunciation wherever possible.", "Fig.", "2 gives an overview of this study, showing examples in Khmer.", "We simplify abugidas by omitting vowel diacritics and merging consonant letters with identical or similar phonetic values, as shown in (a).", "This simplification is intuitive, both orthographically and phonetically.", "To resolve the ambiguities introduced by the simplification, we use data-driven methods to recover the original texts, as shown in (b).", "We conducted experiments on four southern Brahmic scripts, i.e., Thai, Burmese, Khmer, and Lao scripts, with a unified framework, using data from the Asian Language Treebank (ALT) (Riza et al., 2016) .", "The experiments show that the abugidas can be recovered satisfactorily by a recurrent neural network (RNN) using long short-term memory (LSTM) units, even when nearly all of the diacritics (10 -30 types) have been omitted and the remaining 30 -50 characters have been merged into 21 graphemes.", "Thai gave the best performance, with 97% top-1 accuracy for graphemes and over 99% top-4 accuracy.", "Lao, which gave the worst performance, still achieved the top-1 and top-4 accuracies of around 94% and 98%, respectively.", "The Burmese and Khmer results, which lay in-between the other two, were also investigated by manual evaluation.", "Related Work Some optimized keyboard layout have been proposed for specific abugidas (Ouk et al., 2008) .", "Most studies on input methods have focused on Chinese and Japanese characters, where thousands of symbols need to be encoded and recovered.", "For Chinese characters, Chen and Lee (2000) made an early attempt to apply statistical methods to sentence-level processing, using a hidden Markov model.", "Others have examined max-entropy models, support vector machines (SVMs), conditional random fields (CRFs), and machine translation techniques (Wang et al., 2006; Jiang et al., 2007; Li et al., 2009; Yang et al., 2012) .", "Similar methods have also been developed for character conversion in Japanese (Tokunaga et al., 2011) .", "This study takes a similar approach to the research on Chinese and Japanese, transforming a less informative encoding into strings in a natural and redundant writing system.", "Furthermore, our study can be considered as a specific lossy compression scheme on abugida textual data.", "Unlike images or audio, the lossy text compression has received little attention as it may cause difficulties with reading (Witten et al., 1994) .", "However, we handle this issue within an input method framework, where the simplified encoding is not read directly.", "Simplified Abugidas We designed simplification schemes for several different scripts within a unified framework based on phonetics and conventional usages, without considering many language specific features.", "Our primary aim was to investigate the feasibility of reducing the complexity of abugidas and to establish methods of recovering the texts.", "We will consider language-specific optimization in a future work, via both data-and user-driven studies.", "The simplification scheme is shown in Fig.", "3 .", "1 Generally, the merges are based on the common distribution of 
consonant phonemes in most natural languages, as well as the etymology of the characters in each abugida.", "Specifically, three or four graphemes are preserved for the different articulation locations (i.e., guttural, palate, dental, and labial), that two for plosives, one for nasal (NAS.", "), and one for approximant (APP.)", "if present.", "Additional consonants such as trills (R-LIKE), fricatives (S-/H-LIKE), and empty (ZERO-C.) are also assigned their own graphemes.", "Although the simplification omits most diacritics, three types are retained, i.e., one basic mark common to nearly all Brahmic abugidas (LONG-A), the preposed vowels in Thai and Lao (PRE-V.), and the vowel-depressors (and/or consonant-stackers) in Burmese and Khmer (DE-V.).", "We assigned graphemes to these because we found they informed the spelling and were intuitive when typing.", "The net result was the omission of 18 types of diacritics in Thai, 9 in Burmese, 27 in Khmer, and 18 in Lao, and the merging of the remaining 53 types of characters in Thai, 43 in Burmese, 37 in Khmer, and 33 in Lao, into a unified set of 21 graphemes.", "The simplification thus substantially reduces the number of graphemes, and represents a straightforward benchmark for further languagespecific refinement to build on.", "Recovery Methods The recovery process can be formalized as a sequential labeling task, that takes the simplified encoding as input, and outputs the writing units, composed of merged and omitted character(s) in the original abugidas, corresponding to each simplified grapheme.", "Although structured learning methods such as CRF (Lafferty et al., 2001) have been widely used, we found that searching for the label sequences in the output space was too costly, because of the number of labels to be recovered.", "2 Instead, we adopted non-structured point-wise prediction methods using a linear SVM (Cortes and Vapnik, 1995) and an LSTM-based RNN (Hochreiter and Schmidhuber, 1997) .", "Fig.", "4 shows the overall structure of the RNN.", "After many experimentations, a general \"shallow and broad\" configuration was adopted.", "Specifically, simplified grapheme bi-grams are first embedded into 128-dimensional vectors 3 and then encoded in one layer of a bi-directional LSTM, resulting in a final representation consisting of a 512-dimensional vector that concatenates two 256-dimensional vectors from the two directions.", "The number of dimensions used here is large because we found that higher-dimensional vectors were more effective than the deeper structures for this task, as memory capacity was more important than classification ability.", "For the same reason, the representations obtained from the LSTM layer are transformed linearly before the softmax function is applied, as we found that non-linear transformations, which are commonly used for final classification, did not help for this task.", "Experiments and Evaluation We used raw textual data from the ALT, 4 comprising around 20, 000 sentences translated from English.", "The data were divided into training, development, and test sets as specified by the project.", "5 For the SVM experiments, we used the offthe-shelf LIBLINEAR library (Fan et al., 2008) wrapped by the KyTea toolkit.", "6 Table 1 gives the recovery accuracies, demonstrating that recovery is not a difficult classification task, given well represented contextual features.", "In general, using up to 5-gram features before/after the simplified grapheme yielded the best results for the baseline, except with Burmese, 
where 7-gram features brought a small additional improvement.", "Because Burmese texts use relatively more spaces than the other three scripts, longer contexts help more.", "Meanwhile, Lao produced the worst results, possibly because the omission and merging process was harsh: Lao is the most phonetic of the four scripts, with the least redundant spellings.", "The LSTM-based RNN was implemented using DyNet (Neubig et al., 2017) , and it was trained using Adam (Kingma and Ba, 2014) with an initial learning rate of 10 −3 .", "If the accuracy decreased on the development set, the learning rate was halved, and learning was terminated when there was no improvement on the development set for three iterations.", "We did not use dropout (Srivastava et al., 2014) but instead a voting ensemble over a set of differently initialized models trained in parallel, which is both more effective and faster.", "As shown in Table 2 , the RNN outperformed SVM on all scripts in terms of top-1 accuracy.", "A more lenient evaluation, i.e., top-n accuracy, showed a satisfactory coverage of around 98% (Khmer and Lao) to 99% (Thai and Burmese) considering only the top four results.", "Fig.", "5 shows the effect of changing the size of the training dataset by repeatedly halving it until it was one-eighth of its original size, demonstrating that the RNN outperformed SVM regardless of training data size.", "The LSTM-based RNN should thus be a substantially better solution than the SVM for this task.", "We also investigated Burmese and Khmer further using manual evaluation.", "The results of RNN @1 ⊕16 in Table 2 were evaluated by native speakers, who examined the output writing units corresponding to each input simplified grapheme and classified the errors using four levels: 0) acceptable, i.e., alternative spelling, 1) clear and easy to identify the correct result, 2) confusing but possible to identify the correct result, and 3) incomprehensible.", "Table 3 shows the error distribution.", "For Burmese, most of the errors are at levels 1 and 2, and Khmer has a wider distribution.", "For both scripts, around 50% of the errors are serious (level 2 or 3), but the distributions suggest that they have different characteristics.", "We are currently conducting a case study on these errors for further language-specific improvements.", "Conclusion and Future Work In this study, a scheme was used to substantially simplify four abugidas, omitting most diacritics and merging the remaining characters.", "An SVM and an LSTM-based RNN were then used to recover the original texts, showing that the simplified abugidas could be recovered well.", "This illustrates the feasibility of encoding abugidas less redundantly, which could help with the development of more efficient input methods.", "As for the future work, we are planning to include language-specific optimizations in the design of the simplification scheme and to improve" ] }
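A minimal sketch of the top-n accuracy used in the evaluation described above: a position counts as correct if the reference writing unit is among the model's n highest-scoring candidates. The score matrix and reference ids below are random placeholders.

```python
import numpy as np

def top_n_accuracy(scores, gold, n=4):
    """scores: (positions, units) model scores; gold: (positions,) reference unit ids."""
    top_n = np.argsort(-scores, axis=1)[:, :n]       # the n highest-scoring units per position
    hits = (top_n == gold[:, None]).any(axis=1)      # correct if the reference is among them
    return hits.mean()

scores = np.random.rand(1000, 500)                   # placeholder scores over 500 writing units
gold = np.random.randint(0, 500, size=1000)          # placeholder reference ids
print(top_n_accuracy(scores, gold, n=1), top_n_accuracy(scores, gold, n=4))
```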
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Simplified Abugidas", "Recovery Methods", "Experiments and Evaluation", "Conclusion and Future Work" ] }
GEM-SciDuet-train-95#paper-1249#slide-2
Writing Systems How to Input
Logogram > Syllabic Abugida Alphabet & Abjad
Logogram > Syllabic Abugida Alphabet & Abjad
[]
GEM-SciDuet-train-95#paper-1249#slide-3
1249
Simplified Abugidas
An abugida is a writing system where the consonant letters represent syllables with a default vowel and other vowels are denoted by diacritics. We investigate the feasibility of recovering the original text written in an abugida after omitting subordinate diacritics and merging consonant letters with similar phonetic values. This is crucial for developing more efficient input methods by reducing the complexity in abugidas. Four abugidas in the southern Brahmic family, i.e., Thai, Burmese, Khmer, and Lao, were studied using a newswire 20,000-sentence dataset. We compared the recovery performance of a support vector machine and an LSTM-based recurrent neural network, finding that the abugida graphemes could be recovered with 94%-97% accuracy at the top-1 level and 98%-99% at the top-4 level, even after omitting most diacritics (10-30 types) and merging the remaining 30-50 characters into 21 graphemes.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91 ], "paper_content_text": [ "Introduction Writing systems are used to record utterances in a wide range of languages and can be organized into the hierarchy shown in Fig.", "1 .", "The symbols in a writing system generally represent either speech sounds (phonograms) or semantic units (logograms) .", "Phonograms can be either segmental or syllabic, with segmental systems being more phonetic because they use separate symbols (i.e., letters) to represent consonants and vowels.", "Segmental systems can be further subdivided depending on their representation of vowels.", "Alphabets (e.g., the Latin, Cyrillic, and Greek scripts) are the most common and treat vowel and consonant let- ters equally.", "In contrast, abjads (e.g., the Arabic and Hebrew scripts) do not write most vowels explicitly.", "The third type, abugidas, also called alphasyllabary, includes features from both segmental and syllabic systems.", "In abugidas, consonant letters represent syllables with a default vowel, and other vowels are denoted by diacritics.", "Abugidas thus denote vowels less explicitly than alphabets but more explicitly than abjads, while being less phonetic than alphabets, but more phonetic than syllabaries.", "Since abugidas combine segmental and syllabic systems, they typically have more symbols than conventional alphabets.", "In this study, we investigate how to simplify and recover abugidas, with the aim of developing a more efficient method of encoding abugidas for input.", "Alphabets generally do not have a large set of symbols, making them easy to map to a traditional keyboard, and logogram and syllabic systems need specially designed input methods because of their large variety of symbols.", "Traditional input methods for abugidas are similar to those for alphabets, mapping two or three different symbols onto each key and requiring users to type each character and diacritic exactly.", "In contrast, we are able to substantially simplify inputting abugidas by encoding them in a lossy (or \"fuzzy\") way.", "TH ะ ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั MY ိ ိ ိ ိ ေိ ိ ိ ိ ိ KM ិ ិ ិ ិ ិ ិ ិ ើិ ើិ ើិ ើិ ែិ ៃិ ើិ ើិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ LO ະ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ຽ ិ ិ ិ ិ ិ ិ OMITTED I II I II I II I II MN K G U C J I Y T D N L P B M W R S H Q A E TH กขฃ คฅฆ ง จฉ ชซฌ ญ ย ฎฏฐดตถ ฑฒทธ ณน ลฦฬ บปผฝ พฟภ ม ว รฤ ศษส หฬ อ ๅ เ แ โ ใ ไ MY ကခ ဂဃ င စဆ ဇဈ ဉ ည ယ ိ ဋဌတထ ဍဎဒဓ ဏန လဠ ပဖ ဗဘ မ ဝ ိ ရ ြိ သဿ ဟ ိ အ ိ ိ ိ ိ KM កខ គឃ ង ចឆ ជឈ ញ យ ដឋតថ ឌឍទធ ណន លឡ បផ ពភ ម វ រ ឝឞស ហ អ ិ ្ LO ກຂ ຄ ງ ຈ ຊ ຍ ຢ ດຕຖ ທ ນ ລ ບປຜຝ ພຟ ມ ວ ຣ ສ ຫຮ ອ ເ ແ ໂ ໃ ໄ APP.", "DENTAL PALATE PRE-V. DE-V. PLOSIVE NAS.", "MERGED R-LIKE S-LIKE H-LIKE LONG-A ZERO-C. 
LABIAL PLOSIVE NAS.", "APP.", "PLOSIVE NAS.", "GUTTURAL PLOSIVE NAS.", "APP.", "Figure 3 : Merging and omission for Thai (TH), Burmese (MY), Khmer (KM), and Lao (LO) scripts.", "The MN row lists the mnemonics assigned to graphemes in our experiment.", "In this study, the mnemonics can be assigned arbitrarily, and we selected Latin letters related to the real pronunciation wherever possible.", "Fig.", "2 gives an overview of this study, showing examples in Khmer.", "We simplify abugidas by omitting vowel diacritics and merging consonant letters with identical or similar phonetic values, as shown in (a).", "This simplification is intuitive, both orthographically and phonetically.", "To resolve the ambiguities introduced by the simplification, we use data-driven methods to recover the original texts, as shown in (b).", "We conducted experiments on four southern Brahmic scripts, i.e., Thai, Burmese, Khmer, and Lao scripts, with a unified framework, using data from the Asian Language Treebank (ALT) (Riza et al., 2016) .", "The experiments show that the abugidas can be recovered satisfactorily by a recurrent neural network (RNN) using long short-term memory (LSTM) units, even when nearly all of the diacritics (10 -30 types) have been omitted and the remaining 30 -50 characters have been merged into 21 graphemes.", "Thai gave the best performance, with 97% top-1 accuracy for graphemes and over 99% top-4 accuracy.", "Lao, which gave the worst performance, still achieved the top-1 and top-4 accuracies of around 94% and 98%, respectively.", "The Burmese and Khmer results, which lay in-between the other two, were also investigated by manual evaluation.", "Related Work Some optimized keyboard layout have been proposed for specific abugidas (Ouk et al., 2008) .", "Most studies on input methods have focused on Chinese and Japanese characters, where thousands of symbols need to be encoded and recovered.", "For Chinese characters, Chen and Lee (2000) made an early attempt to apply statistical methods to sentence-level processing, using a hidden Markov model.", "Others have examined max-entropy models, support vector machines (SVMs), conditional random fields (CRFs), and machine translation techniques (Wang et al., 2006; Jiang et al., 2007; Li et al., 2009; Yang et al., 2012) .", "Similar methods have also been developed for character conversion in Japanese (Tokunaga et al., 2011) .", "This study takes a similar approach to the research on Chinese and Japanese, transforming a less informative encoding into strings in a natural and redundant writing system.", "Furthermore, our study can be considered as a specific lossy compression scheme on abugida textual data.", "Unlike images or audio, the lossy text compression has received little attention as it may cause difficulties with reading (Witten et al., 1994) .", "However, we handle this issue within an input method framework, where the simplified encoding is not read directly.", "Simplified Abugidas We designed simplification schemes for several different scripts within a unified framework based on phonetics and conventional usages, without considering many language specific features.", "Our primary aim was to investigate the feasibility of reducing the complexity of abugidas and to establish methods of recovering the texts.", "We will consider language-specific optimization in a future work, via both data-and user-driven studies.", "The simplification scheme is shown in Fig.", "3 .", "1 Generally, the merges are based on the common distribution of 
consonant phonemes in most natural languages, as well as the etymology of the characters in each abugida.", "Specifically, three or four graphemes are preserved for the different articulation locations (i.e., guttural, palate, dental, and labial), that two for plosives, one for nasal (NAS.", "), and one for approximant (APP.)", "if present.", "Additional consonants such as trills (R-LIKE), fricatives (S-/H-LIKE), and empty (ZERO-C.) are also assigned their own graphemes.", "Although the simplification omits most diacritics, three types are retained, i.e., one basic mark common to nearly all Brahmic abugidas (LONG-A), the preposed vowels in Thai and Lao (PRE-V.), and the vowel-depressors (and/or consonant-stackers) in Burmese and Khmer (DE-V.).", "We assigned graphemes to these because we found they informed the spelling and were intuitive when typing.", "The net result was the omission of 18 types of diacritics in Thai, 9 in Burmese, 27 in Khmer, and 18 in Lao, and the merging of the remaining 53 types of characters in Thai, 43 in Burmese, 37 in Khmer, and 33 in Lao, into a unified set of 21 graphemes.", "The simplification thus substantially reduces the number of graphemes, and represents a straightforward benchmark for further languagespecific refinement to build on.", "Recovery Methods The recovery process can be formalized as a sequential labeling task, that takes the simplified encoding as input, and outputs the writing units, composed of merged and omitted character(s) in the original abugidas, corresponding to each simplified grapheme.", "Although structured learning methods such as CRF (Lafferty et al., 2001) have been widely used, we found that searching for the label sequences in the output space was too costly, because of the number of labels to be recovered.", "2 Instead, we adopted non-structured point-wise prediction methods using a linear SVM (Cortes and Vapnik, 1995) and an LSTM-based RNN (Hochreiter and Schmidhuber, 1997) .", "Fig.", "4 shows the overall structure of the RNN.", "After many experimentations, a general \"shallow and broad\" configuration was adopted.", "Specifically, simplified grapheme bi-grams are first embedded into 128-dimensional vectors 3 and then encoded in one layer of a bi-directional LSTM, resulting in a final representation consisting of a 512-dimensional vector that concatenates two 256-dimensional vectors from the two directions.", "The number of dimensions used here is large because we found that higher-dimensional vectors were more effective than the deeper structures for this task, as memory capacity was more important than classification ability.", "For the same reason, the representations obtained from the LSTM layer are transformed linearly before the softmax function is applied, as we found that non-linear transformations, which are commonly used for final classification, did not help for this task.", "Experiments and Evaluation We used raw textual data from the ALT, 4 comprising around 20, 000 sentences translated from English.", "The data were divided into training, development, and test sets as specified by the project.", "5 For the SVM experiments, we used the offthe-shelf LIBLINEAR library (Fan et al., 2008) wrapped by the KyTea toolkit.", "6 Table 1 gives the recovery accuracies, demonstrating that recovery is not a difficult classification task, given well represented contextual features.", "In general, using up to 5-gram features before/after the simplified grapheme yielded the best results for the baseline, except with Burmese, 
where 7-gram features brought a small additional improvement.", "Because Burmese texts use relatively more spaces than the other three scripts, longer contexts help more.", "Meanwhile, Lao produced the worst results, possibly because the omission and merging process was harsh: Lao is the most phonetic of the four scripts, with the least redundant spellings.", "The LSTM-based RNN was implemented using DyNet (Neubig et al., 2017) , and it was trained using Adam (Kingma and Ba, 2014) with an initial learning rate of 10 −3 .", "If the accuracy decreased on the development set, the learning rate was halved, and learning was terminated when there was no improvement on the development set for three iterations.", "We did not use dropout (Srivastava et al., 2014) but instead a voting ensemble over a set of differently initialized models trained in parallel, which is both more effective and faster.", "As shown in Table 2 , the RNN outperformed SVM on all scripts in terms of top-1 accuracy.", "A more lenient evaluation, i.e., top-n accuracy, showed a satisfactory coverage of around 98% (Khmer and Lao) to 99% (Thai and Burmese) considering only the top four results.", "Fig.", "5 shows the effect of changing the size of the training dataset by repeatedly halving it until it was one-eighth of its original size, demonstrating that the RNN outperformed SVM regardless of training data size.", "The LSTM-based RNN should thus be a substantially better solution than the SVM for this task.", "We also investigated Burmese and Khmer further using manual evaluation.", "The results of RNN @1 ⊕16 in Table 2 were evaluated by native speakers, who examined the output writing units corresponding to each input simplified grapheme and classified the errors using four levels: 0) acceptable, i.e., alternative spelling, 1) clear and easy to identify the correct result, 2) confusing but possible to identify the correct result, and 3) incomprehensible.", "Table 3 shows the error distribution.", "For Burmese, most of the errors are at levels 1 and 2, and Khmer has a wider distribution.", "For both scripts, around 50% of the errors are serious (level 2 or 3), but the distributions suggest that they have different characteristics.", "We are currently conducting a case study on these errors for further language-specific improvements.", "Conclusion and Future Work In this study, a scheme was used to substantially simplify four abugidas, omitting most diacritics and merging the remaining characters.", "An SVM and an LSTM-based RNN were then used to recover the original texts, showing that the simplified abugidas could be recovered well.", "This illustrates the feasibility of encoding abugidas less redundantly, which could help with the development of more efficient input methods.", "As for the future work, we are planning to include language-specific optimizations in the design of the simplification scheme and to improve" ] }
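The point-wise SVM baseline described above used LIBLINEAR wrapped by KyTea, with up to 5-gram context features around each simplified grapheme. The sketch below is a hedged stand-in using scikit-learn instead of those toolkits; the toy sequences and writing-unit labels are invented placeholders, not real abugida data.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC

def context_features(seq, i, max_n=5):
    """N-grams (n = 1..max_n) to the left and right of position i, plus the grapheme itself."""
    feats = {"C=" + seq[i]: 1}
    for n in range(1, max_n + 1):
        feats["L%d=%s" % (n, seq[max(0, i - n):i])] = 1
        feats["R%d=%s" % (n, seq[i + 1:i + 1 + n])] = 1
    return feats

# Toy data: (simplified grapheme sequence, original writing unit for each position).
data = [("KATA", ["k1", "a1", "t1", "a1"]),
        ("KITA", ["k2", "a2", "t1", "a1"])]
X, y = [], []
for simplified, units in data:
    for i, unit in enumerate(units):
        X.append(context_features(simplified, i))
        y.append(unit)

vec = DictVectorizer()
clf = LinearSVC().fit(vec.fit_transform(X), y)
pred = clf.predict(vec.transform([context_features("KATA", 0)]))   # point-wise prediction for one position
```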
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Simplified Abugidas", "Recovery Methods", "Experiments and Evaluation", "Conclusion and Future Work" ] }
GEM-SciDuet-train-95#paper-1249#slide-3
Motivation of This Study
Can abugidas be inputted more efficiently? To insert a light layer of input method To type less and to recover automatically Various approaches for Chinese and Japanese To take advantage of redundancy in a writing system
Can abugidas be inputted more efficiently? To insert a light layer of input method To type less and to recover automatically Various approaches for Chinese and Japanese To take advantage of redundancy in a writing system
[]
GEM-SciDuet-train-95#paper-1249#slide-4
1249
Simplified Abugidas
An abugida is a writing system where the consonant letters represent syllables with a default vowel and other vowels are denoted by diacritics. We investigate the feasibility of recovering the original text written in an abugida after omitting subordinate diacritics and merging consonant letters with similar phonetic values. This is crucial for developing more efficient input methods by reducing the complexity in abugidas. Four abugidas in the southern Brahmic family, i.e., Thai, Burmese, Khmer, and Lao, were studied using a newswire 20,000-sentence dataset. We compared the recovery performance of a support vector machine and an LSTM-based recurrent neural network, finding that the abugida graphemes could be recovered with 94%-97% accuracy at the top-1 level and 98%-99% at the top-4 level, even after omitting most diacritics (10-30 types) and merging the remaining 30-50 characters into 21 graphemes.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91 ], "paper_content_text": [ "Introduction Writing systems are used to record utterances in a wide range of languages and can be organized into the hierarchy shown in Fig.", "1 .", "The symbols in a writing system generally represent either speech sounds (phonograms) or semantic units (logograms) .", "Phonograms can be either segmental or syllabic, with segmental systems being more phonetic because they use separate symbols (i.e., letters) to represent consonants and vowels.", "Segmental systems can be further subdivided depending on their representation of vowels.", "Alphabets (e.g., the Latin, Cyrillic, and Greek scripts) are the most common and treat vowel and consonant let- ters equally.", "In contrast, abjads (e.g., the Arabic and Hebrew scripts) do not write most vowels explicitly.", "The third type, abugidas, also called alphasyllabary, includes features from both segmental and syllabic systems.", "In abugidas, consonant letters represent syllables with a default vowel, and other vowels are denoted by diacritics.", "Abugidas thus denote vowels less explicitly than alphabets but more explicitly than abjads, while being less phonetic than alphabets, but more phonetic than syllabaries.", "Since abugidas combine segmental and syllabic systems, they typically have more symbols than conventional alphabets.", "In this study, we investigate how to simplify and recover abugidas, with the aim of developing a more efficient method of encoding abugidas for input.", "Alphabets generally do not have a large set of symbols, making them easy to map to a traditional keyboard, and logogram and syllabic systems need specially designed input methods because of their large variety of symbols.", "Traditional input methods for abugidas are similar to those for alphabets, mapping two or three different symbols onto each key and requiring users to type each character and diacritic exactly.", "In contrast, we are able to substantially simplify inputting abugidas by encoding them in a lossy (or \"fuzzy\") way.", "TH ะ ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั MY ိ ိ ိ ိ ေိ ိ ိ ိ ိ KM ិ ិ ិ ិ ិ ិ ិ ើិ ើិ ើិ ើិ ែិ ៃិ ើិ ើិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ LO ະ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ຽ ិ ិ ិ ិ ិ ិ OMITTED I II I II I II I II MN K G U C J I Y T D N L P B M W R S H Q A E TH กขฃ คฅฆ ง จฉ ชซฌ ญ ย ฎฏฐดตถ ฑฒทธ ณน ลฦฬ บปผฝ พฟภ ม ว รฤ ศษส หฬ อ ๅ เ แ โ ใ ไ MY ကခ ဂဃ င စဆ ဇဈ ဉ ည ယ ိ ဋဌတထ ဍဎဒဓ ဏန လဠ ပဖ ဗဘ မ ဝ ိ ရ ြိ သဿ ဟ ိ အ ိ ိ ိ ိ KM កខ គឃ ង ចឆ ជឈ ញ យ ដឋតថ ឌឍទធ ណន លឡ បផ ពភ ម វ រ ឝឞស ហ អ ិ ្ LO ກຂ ຄ ງ ຈ ຊ ຍ ຢ ດຕຖ ທ ນ ລ ບປຜຝ ພຟ ມ ວ ຣ ສ ຫຮ ອ ເ ແ ໂ ໃ ໄ APP.", "DENTAL PALATE PRE-V. DE-V. PLOSIVE NAS.", "MERGED R-LIKE S-LIKE H-LIKE LONG-A ZERO-C. 
LABIAL PLOSIVE NAS.", "APP.", "PLOSIVE NAS.", "GUTTURAL PLOSIVE NAS.", "APP.", "Figure 3 : Merging and omission for Thai (TH), Burmese (MY), Khmer (KM), and Lao (LO) scripts.", "The MN row lists the mnemonics assigned to graphemes in our experiment.", "In this study, the mnemonics can be assigned arbitrarily, and we selected Latin letters related to the real pronunciation wherever possible.", "Fig.", "2 gives an overview of this study, showing examples in Khmer.", "We simplify abugidas by omitting vowel diacritics and merging consonant letters with identical or similar phonetic values, as shown in (a).", "This simplification is intuitive, both orthographically and phonetically.", "To resolve the ambiguities introduced by the simplification, we use data-driven methods to recover the original texts, as shown in (b).", "We conducted experiments on four southern Brahmic scripts, i.e., Thai, Burmese, Khmer, and Lao scripts, with a unified framework, using data from the Asian Language Treebank (ALT) (Riza et al., 2016) .", "The experiments show that the abugidas can be recovered satisfactorily by a recurrent neural network (RNN) using long short-term memory (LSTM) units, even when nearly all of the diacritics (10 -30 types) have been omitted and the remaining 30 -50 characters have been merged into 21 graphemes.", "Thai gave the best performance, with 97% top-1 accuracy for graphemes and over 99% top-4 accuracy.", "Lao, which gave the worst performance, still achieved the top-1 and top-4 accuracies of around 94% and 98%, respectively.", "The Burmese and Khmer results, which lay in-between the other two, were also investigated by manual evaluation.", "Related Work Some optimized keyboard layout have been proposed for specific abugidas (Ouk et al., 2008) .", "Most studies on input methods have focused on Chinese and Japanese characters, where thousands of symbols need to be encoded and recovered.", "For Chinese characters, Chen and Lee (2000) made an early attempt to apply statistical methods to sentence-level processing, using a hidden Markov model.", "Others have examined max-entropy models, support vector machines (SVMs), conditional random fields (CRFs), and machine translation techniques (Wang et al., 2006; Jiang et al., 2007; Li et al., 2009; Yang et al., 2012) .", "Similar methods have also been developed for character conversion in Japanese (Tokunaga et al., 2011) .", "This study takes a similar approach to the research on Chinese and Japanese, transforming a less informative encoding into strings in a natural and redundant writing system.", "Furthermore, our study can be considered as a specific lossy compression scheme on abugida textual data.", "Unlike images or audio, the lossy text compression has received little attention as it may cause difficulties with reading (Witten et al., 1994) .", "However, we handle this issue within an input method framework, where the simplified encoding is not read directly.", "Simplified Abugidas We designed simplification schemes for several different scripts within a unified framework based on phonetics and conventional usages, without considering many language specific features.", "Our primary aim was to investigate the feasibility of reducing the complexity of abugidas and to establish methods of recovering the texts.", "We will consider language-specific optimization in a future work, via both data-and user-driven studies.", "The simplification scheme is shown in Fig.", "3 .", "1 Generally, the merges are based on the common distribution of 
consonant phonemes in most natural languages, as well as the etymology of the characters in each abugida.", "Specifically, three or four graphemes are preserved for the different articulation locations (i.e., guttural, palate, dental, and labial), that two for plosives, one for nasal (NAS.", "), and one for approximant (APP.)", "if present.", "Additional consonants such as trills (R-LIKE), fricatives (S-/H-LIKE), and empty (ZERO-C.) are also assigned their own graphemes.", "Although the simplification omits most diacritics, three types are retained, i.e., one basic mark common to nearly all Brahmic abugidas (LONG-A), the preposed vowels in Thai and Lao (PRE-V.), and the vowel-depressors (and/or consonant-stackers) in Burmese and Khmer (DE-V.).", "We assigned graphemes to these because we found they informed the spelling and were intuitive when typing.", "The net result was the omission of 18 types of diacritics in Thai, 9 in Burmese, 27 in Khmer, and 18 in Lao, and the merging of the remaining 53 types of characters in Thai, 43 in Burmese, 37 in Khmer, and 33 in Lao, into a unified set of 21 graphemes.", "The simplification thus substantially reduces the number of graphemes, and represents a straightforward benchmark for further languagespecific refinement to build on.", "Recovery Methods The recovery process can be formalized as a sequential labeling task, that takes the simplified encoding as input, and outputs the writing units, composed of merged and omitted character(s) in the original abugidas, corresponding to each simplified grapheme.", "Although structured learning methods such as CRF (Lafferty et al., 2001) have been widely used, we found that searching for the label sequences in the output space was too costly, because of the number of labels to be recovered.", "2 Instead, we adopted non-structured point-wise prediction methods using a linear SVM (Cortes and Vapnik, 1995) and an LSTM-based RNN (Hochreiter and Schmidhuber, 1997) .", "Fig.", "4 shows the overall structure of the RNN.", "After many experimentations, a general \"shallow and broad\" configuration was adopted.", "Specifically, simplified grapheme bi-grams are first embedded into 128-dimensional vectors 3 and then encoded in one layer of a bi-directional LSTM, resulting in a final representation consisting of a 512-dimensional vector that concatenates two 256-dimensional vectors from the two directions.", "The number of dimensions used here is large because we found that higher-dimensional vectors were more effective than the deeper structures for this task, as memory capacity was more important than classification ability.", "For the same reason, the representations obtained from the LSTM layer are transformed linearly before the softmax function is applied, as we found that non-linear transformations, which are commonly used for final classification, did not help for this task.", "Experiments and Evaluation We used raw textual data from the ALT, 4 comprising around 20, 000 sentences translated from English.", "The data were divided into training, development, and test sets as specified by the project.", "5 For the SVM experiments, we used the offthe-shelf LIBLINEAR library (Fan et al., 2008) wrapped by the KyTea toolkit.", "6 Table 1 gives the recovery accuracies, demonstrating that recovery is not a difficult classification task, given well represented contextual features.", "In general, using up to 5-gram features before/after the simplified grapheme yielded the best results for the baseline, except with Burmese, 
where 7-gram features brought a small additional improvement.", "Because Burmese texts use relatively more spaces than the other three scripts, longer contexts help more.", "Meanwhile, Lao produced the worst results, possibly because the omission and merging process was harsh: Lao is the most phonetic of the four scripts, with the least redundant spellings.", "The LSTM-based RNN was implemented using DyNet (Neubig et al., 2017) , and it was trained using Adam (Kingma and Ba, 2014) with an initial learning rate of 10 −3 .", "If the accuracy decreased on the development set, the learning rate was halved, and learning was terminated when there was no improvement on the development set for three iterations.", "We did not use dropout (Srivastava et al., 2014) but instead a voting ensemble over a set of differently initialized models trained in parallel, which is both more effective and faster.", "As shown in Table 2 , the RNN outperformed SVM on all scripts in terms of top-1 accuracy.", "A more lenient evaluation, i.e., top-n accuracy, showed a satisfactory coverage of around 98% (Khmer and Lao) to 99% (Thai and Burmese) considering only the top four results.", "Fig.", "5 shows the effect of changing the size of the training dataset by repeatedly halving it until it was one-eighth of its original size, demonstrating that the RNN outperformed SVM regardless of training data size.", "The LSTM-based RNN should thus be a substantially better solution than the SVM for this task.", "We also investigated Burmese and Khmer further using manual evaluation.", "The results of RNN @1 ⊕16 in Table 2 were evaluated by native speakers, who examined the output writing units corresponding to each input simplified grapheme and classified the errors using four levels: 0) acceptable, i.e., alternative spelling, 1) clear and easy to identify the correct result, 2) confusing but possible to identify the correct result, and 3) incomprehensible.", "Table 3 shows the error distribution.", "For Burmese, most of the errors are at levels 1 and 2, and Khmer has a wider distribution.", "For both scripts, around 50% of the errors are serious (level 2 or 3), but the distributions suggest that they have different characteristics.", "We are currently conducting a case study on these errors for further language-specific improvements.", "Conclusion and Future Work In this study, a scheme was used to substantially simplify four abugidas, omitting most diacritics and merging the remaining characters.", "An SVM and an LSTM-based RNN were then used to recover the original texts, showing that the simplified abugidas could be recovered well.", "This illustrates the feasibility of encoding abugidas less redundantly, which could help with the development of more efficient input methods.", "As for the future work, we are planning to include language-specific optimizations in the design of the simplification scheme and to improve" ] }
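The merging and omission described above reduce to a per-character lookup at input time. A minimal sketch follows, carrying only a handful of Thai entries read off Figure 3 (กขฃ to K, คฅฆ to G, ง to U) plus a couple of example vowel diacritics to drop; a full implementation would hold the complete merge and omission tables for all four scripts.

```python
# Partial, illustrative tables only; a real system would carry the complete
# Figure 3 merge and omission tables for Thai, Burmese, Khmer, and Lao.
MERGE = {'ก': 'K', 'ข': 'K', 'ฃ': 'K',     # Thai guttural plosives, group I
         'ค': 'G', 'ฅ': 'G', 'ฆ': 'G',     # Thai guttural plosives, group II
         'ง': 'U'}                          # Thai guttural nasal
OMIT = {'\u0e31', '\u0e34', '\u0e35'}       # example Thai vowel diacritics dropped in the lossy step

def simplify(text):
    out = []
    for ch in text:
        if ch in OMIT:
            continue                        # subordinate diacritic: omitted
        out.append(MERGE.get(ch, ch))       # merged consonant if listed, otherwise kept as-is
    return ''.join(out)
```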
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Simplified Abugidas", "Recovery Methods", "Experiments and Evaluation", "Conclusion and Future Work" ] }
GEM-SciDuet-train-95#paper-1249#slide-4
Abugida Simplification
Thai, Burmese (Myanmar), Khmer (Cambodian), and Lao Based on phonetics / conventional usages reduced to 21 symbols GUTTURAL PALATE DENTAL LABIAL I I I I II NAS. APP. R-LIKE S-LIKE H-LIKE ZERO-C. LONG-A TH MY KM LO Khmer script as an example J T N N Thai Burmese Khmer Lao Around one quarter characters ( ) saved
Thai, Burmese (Myanmar), Khmer (Cambodian), and Lao Based on phonetics / conventional usages reduced to 21 symbols GUTTURAL PALATE DENTAL LABIAL I I I I II NAS. APP. R-LIKE S-LIKE H-LIKE ZERO-C. LONG-A TH MY KM LO Khmer script as an example J T N N Thai Burmese Khmer Lao Around one quarter characters ( ) saved
[]
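The merging scheme summarized in the slide record above (consonants with similar phonetic values collapsed into 21 graphemes, most diacritics dropped) amounts to a many-to-one character mapping. The sketch below is illustrative only: it hard-codes a small subset of the Khmer column of the paper's Figure 3, not the authors' released code, and the example word is an assumption.

```python
# Minimal sketch of abugida simplification: map phonetically similar
# consonants to shared mnemonic graphemes and drop everything else
# (standing in for the omitted diacritics). Only a few Khmer characters
# are listed here; the full scheme defines 21 graphemes per script.
MERGE = {
    "ក": "K", "ខ": "K",                        # guttural plosives, series I
    "គ": "G", "ឃ": "G",                        # guttural plosives, series II
    "ង": "U",                                   # guttural nasal
    "ដ": "T", "ឋ": "T", "ត": "T", "ថ": "T",    # dental plosives, series I
    "ណ": "N", "ន": "N",                        # dental nasals
    "ល": "L", "ឡ": "L",                        # laterals
    "រ": "R",                                   # trill (R-LIKE)
    "ស": "S",                                   # fricative (S-LIKE)
    "អ": "Q",                                   # zero consonant
    "ា": "A",                                   # LONG-A, one of the retained marks
}

def simplify(text: str) -> str:
    # Keep only characters covered by the table; unlisted marks are omitted.
    return "".join(MERGE[ch] for ch in text if ch in MERGE)

if __name__ == "__main__":
    print(simplify("កណ្ដាល"))   # -> "KNTAL"
```

Recovering the original text then has to undo this many-to-one mapping, which is what the classification models described in the following records do.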
GEM-SciDuet-train-95#paper-1249#slide-5
1249
Simplified Abugidas
An abugida is a writing system where the consonant letters represent syllables with a default vowel and other vowels are denoted by diacritics. We investigate the feasibility of recovering the original text written in an abugida after omitting subordinate diacritics and merging consonant letters with similar phonetic values. This is crucial for developing more efficient input methods by reducing the complexity in abugidas. Four abugidas in the southern Brahmic family, i.e., Thai, Burmese, Khmer, and Lao, were studied using a newswire 20,000-sentence dataset. We compared the recovery performance of a support vector machine and an LSTM-based recurrent neural network, finding that the abugida graphemes could be recovered with 94%-97% accuracy at the top-1 level and 98%-99% at the top-4 level, even after omitting most diacritics (10-30 types) and merging the remaining 30-50 characters into 21 graphemes.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91 ], "paper_content_text": [ "Introduction Writing systems are used to record utterances in a wide range of languages and can be organized into the hierarchy shown in Fig.", "1 .", "The symbols in a writing system generally represent either speech sounds (phonograms) or semantic units (logograms) .", "Phonograms can be either segmental or syllabic, with segmental systems being more phonetic because they use separate symbols (i.e., letters) to represent consonants and vowels.", "Segmental systems can be further subdivided depending on their representation of vowels.", "Alphabets (e.g., the Latin, Cyrillic, and Greek scripts) are the most common and treat vowel and consonant let- ters equally.", "In contrast, abjads (e.g., the Arabic and Hebrew scripts) do not write most vowels explicitly.", "The third type, abugidas, also called alphasyllabary, includes features from both segmental and syllabic systems.", "In abugidas, consonant letters represent syllables with a default vowel, and other vowels are denoted by diacritics.", "Abugidas thus denote vowels less explicitly than alphabets but more explicitly than abjads, while being less phonetic than alphabets, but more phonetic than syllabaries.", "Since abugidas combine segmental and syllabic systems, they typically have more symbols than conventional alphabets.", "In this study, we investigate how to simplify and recover abugidas, with the aim of developing a more efficient method of encoding abugidas for input.", "Alphabets generally do not have a large set of symbols, making them easy to map to a traditional keyboard, and logogram and syllabic systems need specially designed input methods because of their large variety of symbols.", "Traditional input methods for abugidas are similar to those for alphabets, mapping two or three different symbols onto each key and requiring users to type each character and diacritic exactly.", "In contrast, we are able to substantially simplify inputting abugidas by encoding them in a lossy (or \"fuzzy\") way.", "TH ะ ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั MY ိ ိ ိ ိ ေိ ိ ိ ိ ိ KM ិ ិ ិ ិ ិ ិ ិ ើិ ើិ ើិ ើិ ែិ ៃិ ើិ ើិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ LO ະ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ຽ ិ ិ ិ ិ ិ ិ OMITTED I II I II I II I II MN K G U C J I Y T D N L P B M W R S H Q A E TH กขฃ คฅฆ ง จฉ ชซฌ ญ ย ฎฏฐดตถ ฑฒทธ ณน ลฦฬ บปผฝ พฟภ ม ว รฤ ศษส หฬ อ ๅ เ แ โ ใ ไ MY ကခ ဂဃ င စဆ ဇဈ ဉ ည ယ ိ ဋဌတထ ဍဎဒဓ ဏန လဠ ပဖ ဗဘ မ ဝ ိ ရ ြိ သဿ ဟ ိ အ ိ ိ ိ ိ KM កខ គឃ ង ចឆ ជឈ ញ យ ដឋតថ ឌឍទធ ណន លឡ បផ ពភ ម វ រ ឝឞស ហ អ ិ ្ LO ກຂ ຄ ງ ຈ ຊ ຍ ຢ ດຕຖ ທ ນ ລ ບປຜຝ ພຟ ມ ວ ຣ ສ ຫຮ ອ ເ ແ ໂ ໃ ໄ APP.", "DENTAL PALATE PRE-V. DE-V. PLOSIVE NAS.", "MERGED R-LIKE S-LIKE H-LIKE LONG-A ZERO-C. 
LABIAL PLOSIVE NAS.", "APP.", "PLOSIVE NAS.", "GUTTURAL PLOSIVE NAS.", "APP.", "Figure 3 : Merging and omission for Thai (TH), Burmese (MY), Khmer (KM), and Lao (LO) scripts.", "The MN row lists the mnemonics assigned to graphemes in our experiment.", "In this study, the mnemonics can be assigned arbitrarily, and we selected Latin letters related to the real pronunciation wherever possible.", "Fig.", "2 gives an overview of this study, showing examples in Khmer.", "We simplify abugidas by omitting vowel diacritics and merging consonant letters with identical or similar phonetic values, as shown in (a).", "This simplification is intuitive, both orthographically and phonetically.", "To resolve the ambiguities introduced by the simplification, we use data-driven methods to recover the original texts, as shown in (b).", "We conducted experiments on four southern Brahmic scripts, i.e., Thai, Burmese, Khmer, and Lao scripts, with a unified framework, using data from the Asian Language Treebank (ALT) (Riza et al., 2016) .", "The experiments show that the abugidas can be recovered satisfactorily by a recurrent neural network (RNN) using long short-term memory (LSTM) units, even when nearly all of the diacritics (10 -30 types) have been omitted and the remaining 30 -50 characters have been merged into 21 graphemes.", "Thai gave the best performance, with 97% top-1 accuracy for graphemes and over 99% top-4 accuracy.", "Lao, which gave the worst performance, still achieved the top-1 and top-4 accuracies of around 94% and 98%, respectively.", "The Burmese and Khmer results, which lay in-between the other two, were also investigated by manual evaluation.", "Related Work Some optimized keyboard layout have been proposed for specific abugidas (Ouk et al., 2008) .", "Most studies on input methods have focused on Chinese and Japanese characters, where thousands of symbols need to be encoded and recovered.", "For Chinese characters, Chen and Lee (2000) made an early attempt to apply statistical methods to sentence-level processing, using a hidden Markov model.", "Others have examined max-entropy models, support vector machines (SVMs), conditional random fields (CRFs), and machine translation techniques (Wang et al., 2006; Jiang et al., 2007; Li et al., 2009; Yang et al., 2012) .", "Similar methods have also been developed for character conversion in Japanese (Tokunaga et al., 2011) .", "This study takes a similar approach to the research on Chinese and Japanese, transforming a less informative encoding into strings in a natural and redundant writing system.", "Furthermore, our study can be considered as a specific lossy compression scheme on abugida textual data.", "Unlike images or audio, the lossy text compression has received little attention as it may cause difficulties with reading (Witten et al., 1994) .", "However, we handle this issue within an input method framework, where the simplified encoding is not read directly.", "Simplified Abugidas We designed simplification schemes for several different scripts within a unified framework based on phonetics and conventional usages, without considering many language specific features.", "Our primary aim was to investigate the feasibility of reducing the complexity of abugidas and to establish methods of recovering the texts.", "We will consider language-specific optimization in a future work, via both data-and user-driven studies.", "The simplification scheme is shown in Fig.", "3 .", "1 Generally, the merges are based on the common distribution of 
consonant phonemes in most natural languages, as well as the etymology of the characters in each abugida.", "Specifically, three or four graphemes are preserved for the different articulation locations (i.e., guttural, palate, dental, and labial), that two for plosives, one for nasal (NAS.", "), and one for approximant (APP.)", "if present.", "Additional consonants such as trills (R-LIKE), fricatives (S-/H-LIKE), and empty (ZERO-C.) are also assigned their own graphemes.", "Although the simplification omits most diacritics, three types are retained, i.e., one basic mark common to nearly all Brahmic abugidas (LONG-A), the preposed vowels in Thai and Lao (PRE-V.), and the vowel-depressors (and/or consonant-stackers) in Burmese and Khmer (DE-V.).", "We assigned graphemes to these because we found they informed the spelling and were intuitive when typing.", "The net result was the omission of 18 types of diacritics in Thai, 9 in Burmese, 27 in Khmer, and 18 in Lao, and the merging of the remaining 53 types of characters in Thai, 43 in Burmese, 37 in Khmer, and 33 in Lao, into a unified set of 21 graphemes.", "The simplification thus substantially reduces the number of graphemes, and represents a straightforward benchmark for further languagespecific refinement to build on.", "Recovery Methods The recovery process can be formalized as a sequential labeling task, that takes the simplified encoding as input, and outputs the writing units, composed of merged and omitted character(s) in the original abugidas, corresponding to each simplified grapheme.", "Although structured learning methods such as CRF (Lafferty et al., 2001) have been widely used, we found that searching for the label sequences in the output space was too costly, because of the number of labels to be recovered.", "2 Instead, we adopted non-structured point-wise prediction methods using a linear SVM (Cortes and Vapnik, 1995) and an LSTM-based RNN (Hochreiter and Schmidhuber, 1997) .", "Fig.", "4 shows the overall structure of the RNN.", "After many experimentations, a general \"shallow and broad\" configuration was adopted.", "Specifically, simplified grapheme bi-grams are first embedded into 128-dimensional vectors 3 and then encoded in one layer of a bi-directional LSTM, resulting in a final representation consisting of a 512-dimensional vector that concatenates two 256-dimensional vectors from the two directions.", "The number of dimensions used here is large because we found that higher-dimensional vectors were more effective than the deeper structures for this task, as memory capacity was more important than classification ability.", "For the same reason, the representations obtained from the LSTM layer are transformed linearly before the softmax function is applied, as we found that non-linear transformations, which are commonly used for final classification, did not help for this task.", "Experiments and Evaluation We used raw textual data from the ALT, 4 comprising around 20, 000 sentences translated from English.", "The data were divided into training, development, and test sets as specified by the project.", "5 For the SVM experiments, we used the offthe-shelf LIBLINEAR library (Fan et al., 2008) wrapped by the KyTea toolkit.", "6 Table 1 gives the recovery accuracies, demonstrating that recovery is not a difficult classification task, given well represented contextual features.", "In general, using up to 5-gram features before/after the simplified grapheme yielded the best results for the baseline, except with Burmese, 
where 7-gram features brought a small additional improvement.", "Because Burmese texts use relatively more spaces than the other three scripts, longer contexts help more.", "Meanwhile, Lao produced the worst results, possibly because the omission and merging process was harsh: Lao is the most phonetic of the four scripts, with the least redundant spellings.", "The LSTM-based RNN was implemented using DyNet (Neubig et al., 2017) , and it was trained using Adam (Kingma and Ba, 2014) with an initial learning rate of 10 −3 .", "If the accuracy decreased on the development set, the learning rate was halved, and learning was terminated when there was no improvement on the development set for three iterations.", "We did not use dropout (Srivastava et al., 2014) but instead a voting ensemble over a set of differently initialized models trained in parallel, which is both more effective and faster.", "As shown in Table 2 , the RNN outperformed SVM on all scripts in terms of top-1 accuracy.", "A more lenient evaluation, i.e., top-n accuracy, showed a satisfactory coverage of around 98% (Khmer and Lao) to 99% (Thai and Burmese) considering only the top four results.", "Fig.", "5 shows the effect of changing the size of the training dataset by repeatedly halving it until it was one-eighth of its original size, demonstrating that the RNN outperformed SVM regardless of training data size.", "The LSTM-based RNN should thus be a substantially better solution than the SVM for this task.", "We also investigated Burmese and Khmer further using manual evaluation.", "The results of RNN @1 ⊕16 in Table 2 were evaluated by native speakers, who examined the output writing units corresponding to each input simplified grapheme and classified the errors using four levels: 0) acceptable, i.e., alternative spelling, 1) clear and easy to identify the correct result, 2) confusing but possible to identify the correct result, and 3) incomprehensible.", "Table 3 shows the error distribution.", "For Burmese, most of the errors are at levels 1 and 2, and Khmer has a wider distribution.", "For both scripts, around 50% of the errors are serious (level 2 or 3), but the distributions suggest that they have different characteristics.", "We are currently conducting a case study on these errors for further language-specific improvements.", "Conclusion and Future Work In this study, a scheme was used to substantially simplify four abugidas, omitting most diacritics and merging the remaining characters.", "An SVM and an LSTM-based RNN were then used to recover the original texts, showing that the simplified abugidas could be recovered well.", "This illustrates the feasibility of encoding abugidas less redundantly, which could help with the development of more efficient input methods.", "As for the future work, we are planning to include language-specific optimizations in the design of the simplification scheme and to improve" ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Simplified Abugidas", "Recovery Methods", "Experiments and Evaluation", "Conclusion and Future Work" ] }
GEM-SciDuet-train-95#paper-1249#slide-5
Recovery Methods
To formulate as a sequential labeling task However, list-wise search as in conditional random fields is costly To solve by point-wise classification Support vector machine (SVM) as a baseline Recurrent neural network (RNN) as a state-of-the-art method Setting for the SVM baseline Linear kernel with N-gram features Wrapped by the KyTea toolkit
To formulate as a sequential labeling task However, list-wise search as in conditional random fields is costly To solve by point-wise classification Support vector machine (SVM) as a baseline Recurrent neural network (RNN) as a state-of-the-art method Setting for the SVM baseline Linear kernel with N-gram features Wrapped by the KyTea toolkit
[]
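The point-wise formulation in this record (label each simplified grapheme with its original writing unit, without structured search) translates directly into an ordinary multi-class classifier over context features. The paper's baseline uses LIBLINEAR wrapped by KyTea; the sketch below substitutes scikit-learn's LinearSVC, uses positional single-grapheme context features instead of full n-grams, and runs on toy data, so treat it as the shape of the approach rather than the authors' setup.

```python
# Point-wise recovery as classification: predict the original writing unit
# for each position from a +/-5 window of simplified graphemes.
# scikit-learn stands in for the LIBLINEAR/KyTea setup used in the paper.
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def window_features(seq, i, n=5):
    """Context features for the grapheme at position i."""
    feats = {"cur=" + seq[i]: 1}
    for d in range(1, n + 1):
        feats[f"-{d}=" + (seq[i - d] if i - d >= 0 else "<s>")] = 1
        feats[f"+{d}=" + (seq[i + d] if i + d < len(seq) else "</s>")] = 1
    return feats

def make_xy(pairs):
    X, y = [], []
    for simplified, originals in pairs:
        for i in range(len(simplified)):
            X.append(window_features(simplified, i))
            y.append(originals[i])
    return X, y

# Toy parallel data: each simplified grapheme is aligned with a synthetic
# label standing in for the original writing unit it must be restored to.
train = [
    ("KNTAL", ["k1", "n2", "t1", "a0", "l1"]),
    ("KTAS",  ["k2", "t2", "a0", "s1"]),
]
X, y = make_xy(train)
model = make_pipeline(DictVectorizer(), LinearSVC())
model.fit(X, y)

test_seq = "KTA"
print(model.predict([window_features(test_seq, i) for i in range(len(test_seq))]))
```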
GEM-SciDuet-train-95#paper-1249#slide-6
1249
Simplified Abugidas
An abugida is a writing system where the consonant letters represent syllables with a default vowel and other vowels are denoted by diacritics. We investigate the feasibility of recovering the original text written in an abugida after omitting subordinate diacritics and merging consonant letters with similar phonetic values. This is crucial for developing more efficient input methods by reducing the complexity in abugidas. Four abugidas in the southern Brahmic family, i.e., Thai, Burmese, Khmer, and Lao, were studied using a newswire 20,000-sentence dataset. We compared the recovery performance of a support vector machine and an LSTM-based recurrent neural network, finding that the abugida graphemes could be recovered with 94%-97% accuracy at the top-1 level and 98%-99% at the top-4 level, even after omitting most diacritics (10-30 types) and merging the remaining 30-50 characters into 21 graphemes.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91 ], "paper_content_text": [ "Introduction Writing systems are used to record utterances in a wide range of languages and can be organized into the hierarchy shown in Fig.", "1 .", "The symbols in a writing system generally represent either speech sounds (phonograms) or semantic units (logograms) .", "Phonograms can be either segmental or syllabic, with segmental systems being more phonetic because they use separate symbols (i.e., letters) to represent consonants and vowels.", "Segmental systems can be further subdivided depending on their representation of vowels.", "Alphabets (e.g., the Latin, Cyrillic, and Greek scripts) are the most common and treat vowel and consonant let- ters equally.", "In contrast, abjads (e.g., the Arabic and Hebrew scripts) do not write most vowels explicitly.", "The third type, abugidas, also called alphasyllabary, includes features from both segmental and syllabic systems.", "In abugidas, consonant letters represent syllables with a default vowel, and other vowels are denoted by diacritics.", "Abugidas thus denote vowels less explicitly than alphabets but more explicitly than abjads, while being less phonetic than alphabets, but more phonetic than syllabaries.", "Since abugidas combine segmental and syllabic systems, they typically have more symbols than conventional alphabets.", "In this study, we investigate how to simplify and recover abugidas, with the aim of developing a more efficient method of encoding abugidas for input.", "Alphabets generally do not have a large set of symbols, making them easy to map to a traditional keyboard, and logogram and syllabic systems need specially designed input methods because of their large variety of symbols.", "Traditional input methods for abugidas are similar to those for alphabets, mapping two or three different symbols onto each key and requiring users to type each character and diacritic exactly.", "In contrast, we are able to substantially simplify inputting abugidas by encoding them in a lossy (or \"fuzzy\") way.", "TH ะ ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั MY ိ ိ ိ ိ ေိ ိ ိ ိ ိ KM ិ ិ ិ ិ ិ ិ ិ ើិ ើិ ើិ ើិ ែិ ៃិ ើិ ើិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ LO ະ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ຽ ិ ិ ិ ិ ិ ិ OMITTED I II I II I II I II MN K G U C J I Y T D N L P B M W R S H Q A E TH กขฃ คฅฆ ง จฉ ชซฌ ญ ย ฎฏฐดตถ ฑฒทธ ณน ลฦฬ บปผฝ พฟภ ม ว รฤ ศษส หฬ อ ๅ เ แ โ ใ ไ MY ကခ ဂဃ င စဆ ဇဈ ဉ ည ယ ိ ဋဌတထ ဍဎဒဓ ဏန လဠ ပဖ ဗဘ မ ဝ ိ ရ ြိ သဿ ဟ ိ အ ိ ိ ိ ိ KM កខ គឃ ង ចឆ ជឈ ញ យ ដឋតថ ឌឍទធ ណន លឡ បផ ពភ ម វ រ ឝឞស ហ អ ិ ្ LO ກຂ ຄ ງ ຈ ຊ ຍ ຢ ດຕຖ ທ ນ ລ ບປຜຝ ພຟ ມ ວ ຣ ສ ຫຮ ອ ເ ແ ໂ ໃ ໄ APP.", "DENTAL PALATE PRE-V. DE-V. PLOSIVE NAS.", "MERGED R-LIKE S-LIKE H-LIKE LONG-A ZERO-C. 
LABIAL PLOSIVE NAS.", "APP.", "PLOSIVE NAS.", "GUTTURAL PLOSIVE NAS.", "APP.", "Figure 3 : Merging and omission for Thai (TH), Burmese (MY), Khmer (KM), and Lao (LO) scripts.", "The MN row lists the mnemonics assigned to graphemes in our experiment.", "In this study, the mnemonics can be assigned arbitrarily, and we selected Latin letters related to the real pronunciation wherever possible.", "Fig.", "2 gives an overview of this study, showing examples in Khmer.", "We simplify abugidas by omitting vowel diacritics and merging consonant letters with identical or similar phonetic values, as shown in (a).", "This simplification is intuitive, both orthographically and phonetically.", "To resolve the ambiguities introduced by the simplification, we use data-driven methods to recover the original texts, as shown in (b).", "We conducted experiments on four southern Brahmic scripts, i.e., Thai, Burmese, Khmer, and Lao scripts, with a unified framework, using data from the Asian Language Treebank (ALT) (Riza et al., 2016) .", "The experiments show that the abugidas can be recovered satisfactorily by a recurrent neural network (RNN) using long short-term memory (LSTM) units, even when nearly all of the diacritics (10 -30 types) have been omitted and the remaining 30 -50 characters have been merged into 21 graphemes.", "Thai gave the best performance, with 97% top-1 accuracy for graphemes and over 99% top-4 accuracy.", "Lao, which gave the worst performance, still achieved the top-1 and top-4 accuracies of around 94% and 98%, respectively.", "The Burmese and Khmer results, which lay in-between the other two, were also investigated by manual evaluation.", "Related Work Some optimized keyboard layout have been proposed for specific abugidas (Ouk et al., 2008) .", "Most studies on input methods have focused on Chinese and Japanese characters, where thousands of symbols need to be encoded and recovered.", "For Chinese characters, Chen and Lee (2000) made an early attempt to apply statistical methods to sentence-level processing, using a hidden Markov model.", "Others have examined max-entropy models, support vector machines (SVMs), conditional random fields (CRFs), and machine translation techniques (Wang et al., 2006; Jiang et al., 2007; Li et al., 2009; Yang et al., 2012) .", "Similar methods have also been developed for character conversion in Japanese (Tokunaga et al., 2011) .", "This study takes a similar approach to the research on Chinese and Japanese, transforming a less informative encoding into strings in a natural and redundant writing system.", "Furthermore, our study can be considered as a specific lossy compression scheme on abugida textual data.", "Unlike images or audio, the lossy text compression has received little attention as it may cause difficulties with reading (Witten et al., 1994) .", "However, we handle this issue within an input method framework, where the simplified encoding is not read directly.", "Simplified Abugidas We designed simplification schemes for several different scripts within a unified framework based on phonetics and conventional usages, without considering many language specific features.", "Our primary aim was to investigate the feasibility of reducing the complexity of abugidas and to establish methods of recovering the texts.", "We will consider language-specific optimization in a future work, via both data-and user-driven studies.", "The simplification scheme is shown in Fig.", "3 .", "1 Generally, the merges are based on the common distribution of 
consonant phonemes in most natural languages, as well as the etymology of the characters in each abugida.", "Specifically, three or four graphemes are preserved for the different articulation locations (i.e., guttural, palate, dental, and labial), that two for plosives, one for nasal (NAS.", "), and one for approximant (APP.)", "if present.", "Additional consonants such as trills (R-LIKE), fricatives (S-/H-LIKE), and empty (ZERO-C.) are also assigned their own graphemes.", "Although the simplification omits most diacritics, three types are retained, i.e., one basic mark common to nearly all Brahmic abugidas (LONG-A), the preposed vowels in Thai and Lao (PRE-V.), and the vowel-depressors (and/or consonant-stackers) in Burmese and Khmer (DE-V.).", "We assigned graphemes to these because we found they informed the spelling and were intuitive when typing.", "The net result was the omission of 18 types of diacritics in Thai, 9 in Burmese, 27 in Khmer, and 18 in Lao, and the merging of the remaining 53 types of characters in Thai, 43 in Burmese, 37 in Khmer, and 33 in Lao, into a unified set of 21 graphemes.", "The simplification thus substantially reduces the number of graphemes, and represents a straightforward benchmark for further languagespecific refinement to build on.", "Recovery Methods The recovery process can be formalized as a sequential labeling task, that takes the simplified encoding as input, and outputs the writing units, composed of merged and omitted character(s) in the original abugidas, corresponding to each simplified grapheme.", "Although structured learning methods such as CRF (Lafferty et al., 2001) have been widely used, we found that searching for the label sequences in the output space was too costly, because of the number of labels to be recovered.", "2 Instead, we adopted non-structured point-wise prediction methods using a linear SVM (Cortes and Vapnik, 1995) and an LSTM-based RNN (Hochreiter and Schmidhuber, 1997) .", "Fig.", "4 shows the overall structure of the RNN.", "After many experimentations, a general \"shallow and broad\" configuration was adopted.", "Specifically, simplified grapheme bi-grams are first embedded into 128-dimensional vectors 3 and then encoded in one layer of a bi-directional LSTM, resulting in a final representation consisting of a 512-dimensional vector that concatenates two 256-dimensional vectors from the two directions.", "The number of dimensions used here is large because we found that higher-dimensional vectors were more effective than the deeper structures for this task, as memory capacity was more important than classification ability.", "For the same reason, the representations obtained from the LSTM layer are transformed linearly before the softmax function is applied, as we found that non-linear transformations, which are commonly used for final classification, did not help for this task.", "Experiments and Evaluation We used raw textual data from the ALT, 4 comprising around 20, 000 sentences translated from English.", "The data were divided into training, development, and test sets as specified by the project.", "5 For the SVM experiments, we used the offthe-shelf LIBLINEAR library (Fan et al., 2008) wrapped by the KyTea toolkit.", "6 Table 1 gives the recovery accuracies, demonstrating that recovery is not a difficult classification task, given well represented contextual features.", "In general, using up to 5-gram features before/after the simplified grapheme yielded the best results for the baseline, except with Burmese, 
where 7-gram features brought a small additional improvement.", "Because Burmese texts use relatively more spaces than the other three scripts, longer contexts help more.", "Meanwhile, Lao produced the worst results, possibly because the omission and merging process was harsh: Lao is the most phonetic of the four scripts, with the least redundant spellings.", "The LSTM-based RNN was implemented using DyNet (Neubig et al., 2017) , and it was trained using Adam (Kingma and Ba, 2014) with an initial learning rate of 10 −3 .", "If the accuracy decreased on the development set, the learning rate was halved, and learning was terminated when there was no improvement on the development set for three iterations.", "We did not use dropout (Srivastava et al., 2014) but instead a voting ensemble over a set of differently initialized models trained in parallel, which is both more effective and faster.", "As shown in Table 2 , the RNN outperformed SVM on all scripts in terms of top-1 accuracy.", "A more lenient evaluation, i.e., top-n accuracy, showed a satisfactory coverage of around 98% (Khmer and Lao) to 99% (Thai and Burmese) considering only the top four results.", "Fig.", "5 shows the effect of changing the size of the training dataset by repeatedly halving it until it was one-eighth of its original size, demonstrating that the RNN outperformed SVM regardless of training data size.", "The LSTM-based RNN should thus be a substantially better solution than the SVM for this task.", "We also investigated Burmese and Khmer further using manual evaluation.", "The results of RNN @1 ⊕16 in Table 2 were evaluated by native speakers, who examined the output writing units corresponding to each input simplified grapheme and classified the errors using four levels: 0) acceptable, i.e., alternative spelling, 1) clear and easy to identify the correct result, 2) confusing but possible to identify the correct result, and 3) incomprehensible.", "Table 3 shows the error distribution.", "For Burmese, most of the errors are at levels 1 and 2, and Khmer has a wider distribution.", "For both scripts, around 50% of the errors are serious (level 2 or 3), but the distributions suggest that they have different characteristics.", "We are currently conducting a case study on these errors for further language-specific improvements.", "Conclusion and Future Work In this study, a scheme was used to substantially simplify four abugidas, omitting most diacritics and merging the remaining characters.", "An SVM and an LSTM-based RNN were then used to recover the original texts, showing that the simplified abugidas could be recovered well.", "This illustrates the feasibility of encoding abugidas less redundantly, which could help with the development of more efficient input methods.", "As for the future work, we are planning to include language-specific optimizations in the design of the simplification scheme and to improve" ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Simplified Abugidas", "Recovery Methods", "Experiments and Evaluation", "Conclusion and Future Work" ] }
GEM-SciDuet-train-95#paper-1249#slide-6
RNN Structure and Settings
Bi-gram of graphemes as input Embedding Bi-directional LSTM Linear transform Softmax Original writing units as output softmax Implemented by DyNet Trained by Adam 512-dim. Initial learning rate 0.001 Controlled by a validation set Multi-model ensemble input J T N N
Bi-gram of graphemes as input Embedding Bi-directional LSTM Linear transform Softmax Original writing units as output softmax Implemented by DyNet Trained by Adam 512-dim. Initial learning rate 0.001 Controlled by a validation set Multi-model ensemble input J T N N
[]
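The architecture listed in this record (bi-gram embedding, a single bi-directional LSTM layer, a linear transform, softmax over writing units) is a standard sequence-labelling network. The paper's implementation is in DyNet; the PyTorch sketch below only illustrates the same shape, with placeholder vocabulary and label sizes, and is not a reproduction of the authors' code.

```python
# Shape of the recovery network: embed simplified-grapheme bi-grams
# (128-dim), one bi-directional LSTM layer (256 units per direction),
# then a purely linear output layer scored with softmax/cross-entropy.
# PyTorch is used for illustration; the paper's model is in DyNet.
import torch
import torch.nn as nn

class Recoverer(nn.Module):
    def __init__(self, n_bigrams, n_labels, emb_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(n_bigrams, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=1,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_labels)   # no non-linearity

    def forward(self, bigram_ids):                   # (batch, seq_len)
        h, _ = self.lstm(self.embed(bigram_ids))     # (batch, seq_len, 512)
        return self.out(h)                           # logits per position

# Placeholder sizes: 21 graphemes give at most 21*21 bi-gram types; the
# number of distinct writing units to recover is script-dependent.
model = Recoverer(n_bigrams=21 * 21, n_labels=300)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr as in the slide
loss_fn = nn.CrossEntropyLoss()

x = torch.randint(0, 21 * 21, (8, 40))   # dummy batch of bi-gram ids
y = torch.randint(0, 300, (8, 40))       # dummy gold writing-unit ids
loss = loss_fn(model(x).reshape(-1, 300), y.reshape(-1))
loss.backward()
optimizer.step()
print(float(loss))
```

The linear-only output layer and the relatively wide hidden state mirror the "shallow and broad" configuration described in the paper's Recovery Methods section.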
GEM-SciDuet-train-95#paper-1249#slide-7
1249
Simplified Abugidas
An abugida is a writing system where the consonant letters represent syllables with a default vowel and other vowels are denoted by diacritics. We investigate the feasibility of recovering the original text written in an abugida after omitting subordinate diacritics and merging consonant letters with similar phonetic values. This is crucial for developing more efficient input methods by reducing the complexity in abugidas. Four abugidas in the southern Brahmic family, i.e., Thai, Burmese, Khmer, and Lao, were studied using a newswire 20,000-sentence dataset. We compared the recovery performance of a support vector machine and an LSTM-based recurrent neural network, finding that the abugida graphemes could be recovered with 94%-97% accuracy at the top-1 level and 98%-99% at the top-4 level, even after omitting most diacritics (10-30 types) and merging the remaining 30-50 characters into 21 graphemes.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91 ], "paper_content_text": [ "Introduction Writing systems are used to record utterances in a wide range of languages and can be organized into the hierarchy shown in Fig.", "1 .", "The symbols in a writing system generally represent either speech sounds (phonograms) or semantic units (logograms) .", "Phonograms can be either segmental or syllabic, with segmental systems being more phonetic because they use separate symbols (i.e., letters) to represent consonants and vowels.", "Segmental systems can be further subdivided depending on their representation of vowels.", "Alphabets (e.g., the Latin, Cyrillic, and Greek scripts) are the most common and treat vowel and consonant let- ters equally.", "In contrast, abjads (e.g., the Arabic and Hebrew scripts) do not write most vowels explicitly.", "The third type, abugidas, also called alphasyllabary, includes features from both segmental and syllabic systems.", "In abugidas, consonant letters represent syllables with a default vowel, and other vowels are denoted by diacritics.", "Abugidas thus denote vowels less explicitly than alphabets but more explicitly than abjads, while being less phonetic than alphabets, but more phonetic than syllabaries.", "Since abugidas combine segmental and syllabic systems, they typically have more symbols than conventional alphabets.", "In this study, we investigate how to simplify and recover abugidas, with the aim of developing a more efficient method of encoding abugidas for input.", "Alphabets generally do not have a large set of symbols, making them easy to map to a traditional keyboard, and logogram and syllabic systems need specially designed input methods because of their large variety of symbols.", "Traditional input methods for abugidas are similar to those for alphabets, mapping two or three different symbols onto each key and requiring users to type each character and diacritic exactly.", "In contrast, we are able to substantially simplify inputting abugidas by encoding them in a lossy (or \"fuzzy\") way.", "TH ะ ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั MY ိ ိ ိ ိ ေိ ိ ိ ိ ိ KM ិ ិ ិ ិ ិ ិ ិ ើិ ើិ ើិ ើិ ែិ ៃិ ើិ ើិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ LO ະ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ຽ ិ ិ ិ ិ ិ ិ OMITTED I II I II I II I II MN K G U C J I Y T D N L P B M W R S H Q A E TH กขฃ คฅฆ ง จฉ ชซฌ ญ ย ฎฏฐดตถ ฑฒทธ ณน ลฦฬ บปผฝ พฟภ ม ว รฤ ศษส หฬ อ ๅ เ แ โ ใ ไ MY ကခ ဂဃ င စဆ ဇဈ ဉ ည ယ ိ ဋဌတထ ဍဎဒဓ ဏန လဠ ပဖ ဗဘ မ ဝ ိ ရ ြိ သဿ ဟ ိ အ ိ ိ ိ ိ KM កខ គឃ ង ចឆ ជឈ ញ យ ដឋតថ ឌឍទធ ណន លឡ បផ ពភ ម វ រ ឝឞស ហ អ ិ ្ LO ກຂ ຄ ງ ຈ ຊ ຍ ຢ ດຕຖ ທ ນ ລ ບປຜຝ ພຟ ມ ວ ຣ ສ ຫຮ ອ ເ ແ ໂ ໃ ໄ APP.", "DENTAL PALATE PRE-V. DE-V. PLOSIVE NAS.", "MERGED R-LIKE S-LIKE H-LIKE LONG-A ZERO-C. 
LABIAL PLOSIVE NAS.", "APP.", "PLOSIVE NAS.", "GUTTURAL PLOSIVE NAS.", "APP.", "Figure 3 : Merging and omission for Thai (TH), Burmese (MY), Khmer (KM), and Lao (LO) scripts.", "The MN row lists the mnemonics assigned to graphemes in our experiment.", "In this study, the mnemonics can be assigned arbitrarily, and we selected Latin letters related to the real pronunciation wherever possible.", "Fig.", "2 gives an overview of this study, showing examples in Khmer.", "We simplify abugidas by omitting vowel diacritics and merging consonant letters with identical or similar phonetic values, as shown in (a).", "This simplification is intuitive, both orthographically and phonetically.", "To resolve the ambiguities introduced by the simplification, we use data-driven methods to recover the original texts, as shown in (b).", "We conducted experiments on four southern Brahmic scripts, i.e., Thai, Burmese, Khmer, and Lao scripts, with a unified framework, using data from the Asian Language Treebank (ALT) (Riza et al., 2016) .", "The experiments show that the abugidas can be recovered satisfactorily by a recurrent neural network (RNN) using long short-term memory (LSTM) units, even when nearly all of the diacritics (10 -30 types) have been omitted and the remaining 30 -50 characters have been merged into 21 graphemes.", "Thai gave the best performance, with 97% top-1 accuracy for graphemes and over 99% top-4 accuracy.", "Lao, which gave the worst performance, still achieved the top-1 and top-4 accuracies of around 94% and 98%, respectively.", "The Burmese and Khmer results, which lay in-between the other two, were also investigated by manual evaluation.", "Related Work Some optimized keyboard layout have been proposed for specific abugidas (Ouk et al., 2008) .", "Most studies on input methods have focused on Chinese and Japanese characters, where thousands of symbols need to be encoded and recovered.", "For Chinese characters, Chen and Lee (2000) made an early attempt to apply statistical methods to sentence-level processing, using a hidden Markov model.", "Others have examined max-entropy models, support vector machines (SVMs), conditional random fields (CRFs), and machine translation techniques (Wang et al., 2006; Jiang et al., 2007; Li et al., 2009; Yang et al., 2012) .", "Similar methods have also been developed for character conversion in Japanese (Tokunaga et al., 2011) .", "This study takes a similar approach to the research on Chinese and Japanese, transforming a less informative encoding into strings in a natural and redundant writing system.", "Furthermore, our study can be considered as a specific lossy compression scheme on abugida textual data.", "Unlike images or audio, the lossy text compression has received little attention as it may cause difficulties with reading (Witten et al., 1994) .", "However, we handle this issue within an input method framework, where the simplified encoding is not read directly.", "Simplified Abugidas We designed simplification schemes for several different scripts within a unified framework based on phonetics and conventional usages, without considering many language specific features.", "Our primary aim was to investigate the feasibility of reducing the complexity of abugidas and to establish methods of recovering the texts.", "We will consider language-specific optimization in a future work, via both data-and user-driven studies.", "The simplification scheme is shown in Fig.", "3 .", "1 Generally, the merges are based on the common distribution of 
consonant phonemes in most natural languages, as well as the etymology of the characters in each abugida.", "Specifically, three or four graphemes are preserved for the different articulation locations (i.e., guttural, palate, dental, and labial), that two for plosives, one for nasal (NAS.", "), and one for approximant (APP.)", "if present.", "Additional consonants such as trills (R-LIKE), fricatives (S-/H-LIKE), and empty (ZERO-C.) are also assigned their own graphemes.", "Although the simplification omits most diacritics, three types are retained, i.e., one basic mark common to nearly all Brahmic abugidas (LONG-A), the preposed vowels in Thai and Lao (PRE-V.), and the vowel-depressors (and/or consonant-stackers) in Burmese and Khmer (DE-V.).", "We assigned graphemes to these because we found they informed the spelling and were intuitive when typing.", "The net result was the omission of 18 types of diacritics in Thai, 9 in Burmese, 27 in Khmer, and 18 in Lao, and the merging of the remaining 53 types of characters in Thai, 43 in Burmese, 37 in Khmer, and 33 in Lao, into a unified set of 21 graphemes.", "The simplification thus substantially reduces the number of graphemes, and represents a straightforward benchmark for further languagespecific refinement to build on.", "Recovery Methods The recovery process can be formalized as a sequential labeling task, that takes the simplified encoding as input, and outputs the writing units, composed of merged and omitted character(s) in the original abugidas, corresponding to each simplified grapheme.", "Although structured learning methods such as CRF (Lafferty et al., 2001) have been widely used, we found that searching for the label sequences in the output space was too costly, because of the number of labels to be recovered.", "2 Instead, we adopted non-structured point-wise prediction methods using a linear SVM (Cortes and Vapnik, 1995) and an LSTM-based RNN (Hochreiter and Schmidhuber, 1997) .", "Fig.", "4 shows the overall structure of the RNN.", "After many experimentations, a general \"shallow and broad\" configuration was adopted.", "Specifically, simplified grapheme bi-grams are first embedded into 128-dimensional vectors 3 and then encoded in one layer of a bi-directional LSTM, resulting in a final representation consisting of a 512-dimensional vector that concatenates two 256-dimensional vectors from the two directions.", "The number of dimensions used here is large because we found that higher-dimensional vectors were more effective than the deeper structures for this task, as memory capacity was more important than classification ability.", "For the same reason, the representations obtained from the LSTM layer are transformed linearly before the softmax function is applied, as we found that non-linear transformations, which are commonly used for final classification, did not help for this task.", "Experiments and Evaluation We used raw textual data from the ALT, 4 comprising around 20, 000 sentences translated from English.", "The data were divided into training, development, and test sets as specified by the project.", "5 For the SVM experiments, we used the offthe-shelf LIBLINEAR library (Fan et al., 2008) wrapped by the KyTea toolkit.", "6 Table 1 gives the recovery accuracies, demonstrating that recovery is not a difficult classification task, given well represented contextual features.", "In general, using up to 5-gram features before/after the simplified grapheme yielded the best results for the baseline, except with Burmese, 
where 7-gram features brought a small additional improvement.", "Because Burmese texts use relatively more spaces than the other three scripts, longer contexts help more.", "Meanwhile, Lao produced the worst results, possibly because the omission and merging process was harsh: Lao is the most phonetic of the four scripts, with the least redundant spellings.", "The LSTM-based RNN was implemented using DyNet (Neubig et al., 2017) , and it was trained using Adam (Kingma and Ba, 2014) with an initial learning rate of 10 −3 .", "If the accuracy decreased on the development set, the learning rate was halved, and learning was terminated when there was no improvement on the development set for three iterations.", "We did not use dropout (Srivastava et al., 2014) but instead a voting ensemble over a set of differently initialized models trained in parallel, which is both more effective and faster.", "As shown in Table 2 , the RNN outperformed SVM on all scripts in terms of top-1 accuracy.", "A more lenient evaluation, i.e., top-n accuracy, showed a satisfactory coverage of around 98% (Khmer and Lao) to 99% (Thai and Burmese) considering only the top four results.", "Fig.", "5 shows the effect of changing the size of the training dataset by repeatedly halving it until it was one-eighth of its original size, demonstrating that the RNN outperformed SVM regardless of training data size.", "The LSTM-based RNN should thus be a substantially better solution than the SVM for this task.", "We also investigated Burmese and Khmer further using manual evaluation.", "The results of RNN @1 ⊕16 in Table 2 were evaluated by native speakers, who examined the output writing units corresponding to each input simplified grapheme and classified the errors using four levels: 0) acceptable, i.e., alternative spelling, 1) clear and easy to identify the correct result, 2) confusing but possible to identify the correct result, and 3) incomprehensible.", "Table 3 shows the error distribution.", "For Burmese, most of the errors are at levels 1 and 2, and Khmer has a wider distribution.", "For both scripts, around 50% of the errors are serious (level 2 or 3), but the distributions suggest that they have different characteristics.", "We are currently conducting a case study on these errors for further language-specific improvements.", "Conclusion and Future Work In this study, a scheme was used to substantially simplify four abugidas, omitting most diacritics and merging the remaining characters.", "An SVM and an LSTM-based RNN were then used to recover the original texts, showing that the simplified abugidas could be recovered well.", "This illustrates the feasibility of encoding abugidas less redundantly, which could help with the development of more efficient input methods.", "As for the future work, we are planning to include language-specific optimizations in the design of the simplification scheme and to improve" ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Simplified Abugidas", "Recovery Methods", "Experiments and Evaluation", "Conclusion and Future Work" ] }
GEM-SciDuet-train-95#paper-1249#slide-7
Experimental Results
Asian Lang. Treebank data Up to 5-gram for TH, KM, LO Top-4 is satisfactory Embedding + bi-LSTM > N-gram features
Asian Lang. Treebank data Up to 5-gram for TH, KM, LO Top-4 is satisfactory Embedding + bi-LSTM > N-gram features
[]
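The top-1 and top-4 figures in this record are per-grapheme accuracies where a prediction counts as correct if the gold writing unit appears among the n best-scoring labels. A small sketch of that metric, assuming model scores are available as a NumPy array (the array values below are made up):

```python
# Top-n accuracy: a grapheme is counted as correctly recovered if its gold
# writing unit is among the n highest-scoring labels.
import numpy as np

def top_n_accuracy(scores, gold, n=4):
    """scores: (num_graphemes, num_labels); gold: (num_graphemes,) label ids."""
    top_n = np.argsort(-scores, axis=1)[:, :n]
    return (top_n == gold[:, None]).any(axis=1).mean()

# Dummy scores over 3 candidate labels for 4 test graphemes.
scores = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.5, 0.4],
                   [0.3, 0.3, 0.4],
                   [0.2, 0.1, 0.7]])
gold = np.array([0, 2, 0, 2])
print(top_n_accuracy(scores, gold, n=1))   # 0.5
print(top_n_accuracy(scores, gold, n=2))   # 1.0
```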
GEM-SciDuet-train-95#paper-1249#slide-8
1249
Simplified Abugidas
An abugida is a writing system where the consonant letters represent syllables with a default vowel and other vowels are denoted by diacritics. We investigate the feasibility of recovering the original text written in an abugida after omitting subordinate diacritics and merging consonant letters with similar phonetic values. This is crucial for developing more efficient input methods by reducing the complexity in abugidas. Four abugidas in the southern Brahmic family, i.e., Thai, Burmese, Khmer, and Lao, were studied using a newswire 20,000-sentence dataset. We compared the recovery performance of a support vector machine and an LSTM-based recurrent neural network, finding that the abugida graphemes could be recovered with 94%-97% accuracy at the top-1 level and 98%-99% at the top-4 level, even after omitting most diacritics (10-30 types) and merging the remaining 30-50 characters into 21 graphemes.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91 ], "paper_content_text": [ "Introduction Writing systems are used to record utterances in a wide range of languages and can be organized into the hierarchy shown in Fig.", "1 .", "The symbols in a writing system generally represent either speech sounds (phonograms) or semantic units (logograms) .", "Phonograms can be either segmental or syllabic, with segmental systems being more phonetic because they use separate symbols (i.e., letters) to represent consonants and vowels.", "Segmental systems can be further subdivided depending on their representation of vowels.", "Alphabets (e.g., the Latin, Cyrillic, and Greek scripts) are the most common and treat vowel and consonant let- ters equally.", "In contrast, abjads (e.g., the Arabic and Hebrew scripts) do not write most vowels explicitly.", "The third type, abugidas, also called alphasyllabary, includes features from both segmental and syllabic systems.", "In abugidas, consonant letters represent syllables with a default vowel, and other vowels are denoted by diacritics.", "Abugidas thus denote vowels less explicitly than alphabets but more explicitly than abjads, while being less phonetic than alphabets, but more phonetic than syllabaries.", "Since abugidas combine segmental and syllabic systems, they typically have more symbols than conventional alphabets.", "In this study, we investigate how to simplify and recover abugidas, with the aim of developing a more efficient method of encoding abugidas for input.", "Alphabets generally do not have a large set of symbols, making them easy to map to a traditional keyboard, and logogram and syllabic systems need specially designed input methods because of their large variety of symbols.", "Traditional input methods for abugidas are similar to those for alphabets, mapping two or three different symbols onto each key and requiring users to type each character and diacritic exactly.", "In contrast, we are able to substantially simplify inputting abugidas by encoding them in a lossy (or \"fuzzy\") way.", "TH ะ ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั MY ိ ိ ိ ိ ေိ ိ ိ ိ ိ KM ិ ិ ិ ិ ិ ិ ិ ើិ ើិ ើិ ើិ ែិ ៃិ ើិ ើិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ LO ະ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ຽ ិ ិ ិ ិ ិ ិ OMITTED I II I II I II I II MN K G U C J I Y T D N L P B M W R S H Q A E TH กขฃ คฅฆ ง จฉ ชซฌ ญ ย ฎฏฐดตถ ฑฒทธ ณน ลฦฬ บปผฝ พฟภ ม ว รฤ ศษส หฬ อ ๅ เ แ โ ใ ไ MY ကခ ဂဃ င စဆ ဇဈ ဉ ည ယ ိ ဋဌတထ ဍဎဒဓ ဏန လဠ ပဖ ဗဘ မ ဝ ိ ရ ြိ သဿ ဟ ိ အ ိ ိ ိ ိ KM កខ គឃ ង ចឆ ជឈ ញ យ ដឋតថ ឌឍទធ ណន លឡ បផ ពភ ម វ រ ឝឞស ហ អ ិ ្ LO ກຂ ຄ ງ ຈ ຊ ຍ ຢ ດຕຖ ທ ນ ລ ບປຜຝ ພຟ ມ ວ ຣ ສ ຫຮ ອ ເ ແ ໂ ໃ ໄ APP.", "DENTAL PALATE PRE-V. DE-V. PLOSIVE NAS.", "MERGED R-LIKE S-LIKE H-LIKE LONG-A ZERO-C. 
LABIAL PLOSIVE NAS.", "APP.", "PLOSIVE NAS.", "GUTTURAL PLOSIVE NAS.", "APP.", "Figure 3 : Merging and omission for Thai (TH), Burmese (MY), Khmer (KM), and Lao (LO) scripts.", "The MN row lists the mnemonics assigned to graphemes in our experiment.", "In this study, the mnemonics can be assigned arbitrarily, and we selected Latin letters related to the real pronunciation wherever possible.", "Fig.", "2 gives an overview of this study, showing examples in Khmer.", "We simplify abugidas by omitting vowel diacritics and merging consonant letters with identical or similar phonetic values, as shown in (a).", "This simplification is intuitive, both orthographically and phonetically.", "To resolve the ambiguities introduced by the simplification, we use data-driven methods to recover the original texts, as shown in (b).", "We conducted experiments on four southern Brahmic scripts, i.e., Thai, Burmese, Khmer, and Lao scripts, with a unified framework, using data from the Asian Language Treebank (ALT) (Riza et al., 2016) .", "The experiments show that the abugidas can be recovered satisfactorily by a recurrent neural network (RNN) using long short-term memory (LSTM) units, even when nearly all of the diacritics (10 -30 types) have been omitted and the remaining 30 -50 characters have been merged into 21 graphemes.", "Thai gave the best performance, with 97% top-1 accuracy for graphemes and over 99% top-4 accuracy.", "Lao, which gave the worst performance, still achieved the top-1 and top-4 accuracies of around 94% and 98%, respectively.", "The Burmese and Khmer results, which lay in-between the other two, were also investigated by manual evaluation.", "Related Work Some optimized keyboard layout have been proposed for specific abugidas (Ouk et al., 2008) .", "Most studies on input methods have focused on Chinese and Japanese characters, where thousands of symbols need to be encoded and recovered.", "For Chinese characters, Chen and Lee (2000) made an early attempt to apply statistical methods to sentence-level processing, using a hidden Markov model.", "Others have examined max-entropy models, support vector machines (SVMs), conditional random fields (CRFs), and machine translation techniques (Wang et al., 2006; Jiang et al., 2007; Li et al., 2009; Yang et al., 2012) .", "Similar methods have also been developed for character conversion in Japanese (Tokunaga et al., 2011) .", "This study takes a similar approach to the research on Chinese and Japanese, transforming a less informative encoding into strings in a natural and redundant writing system.", "Furthermore, our study can be considered as a specific lossy compression scheme on abugida textual data.", "Unlike images or audio, the lossy text compression has received little attention as it may cause difficulties with reading (Witten et al., 1994) .", "However, we handle this issue within an input method framework, where the simplified encoding is not read directly.", "Simplified Abugidas We designed simplification schemes for several different scripts within a unified framework based on phonetics and conventional usages, without considering many language specific features.", "Our primary aim was to investigate the feasibility of reducing the complexity of abugidas and to establish methods of recovering the texts.", "We will consider language-specific optimization in a future work, via both data-and user-driven studies.", "The simplification scheme is shown in Fig.", "3 .", "1 Generally, the merges are based on the common distribution of 
consonant phonemes in most natural languages, as well as the etymology of the characters in each abugida.", "Specifically, three or four graphemes are preserved for the different articulation locations (i.e., guttural, palate, dental, and labial), that two for plosives, one for nasal (NAS.", "), and one for approximant (APP.)", "if present.", "Additional consonants such as trills (R-LIKE), fricatives (S-/H-LIKE), and empty (ZERO-C.) are also assigned their own graphemes.", "Although the simplification omits most diacritics, three types are retained, i.e., one basic mark common to nearly all Brahmic abugidas (LONG-A), the preposed vowels in Thai and Lao (PRE-V.), and the vowel-depressors (and/or consonant-stackers) in Burmese and Khmer (DE-V.).", "We assigned graphemes to these because we found they informed the spelling and were intuitive when typing.", "The net result was the omission of 18 types of diacritics in Thai, 9 in Burmese, 27 in Khmer, and 18 in Lao, and the merging of the remaining 53 types of characters in Thai, 43 in Burmese, 37 in Khmer, and 33 in Lao, into a unified set of 21 graphemes.", "The simplification thus substantially reduces the number of graphemes, and represents a straightforward benchmark for further languagespecific refinement to build on.", "Recovery Methods The recovery process can be formalized as a sequential labeling task, that takes the simplified encoding as input, and outputs the writing units, composed of merged and omitted character(s) in the original abugidas, corresponding to each simplified grapheme.", "Although structured learning methods such as CRF (Lafferty et al., 2001) have been widely used, we found that searching for the label sequences in the output space was too costly, because of the number of labels to be recovered.", "2 Instead, we adopted non-structured point-wise prediction methods using a linear SVM (Cortes and Vapnik, 1995) and an LSTM-based RNN (Hochreiter and Schmidhuber, 1997) .", "Fig.", "4 shows the overall structure of the RNN.", "After many experimentations, a general \"shallow and broad\" configuration was adopted.", "Specifically, simplified grapheme bi-grams are first embedded into 128-dimensional vectors 3 and then encoded in one layer of a bi-directional LSTM, resulting in a final representation consisting of a 512-dimensional vector that concatenates two 256-dimensional vectors from the two directions.", "The number of dimensions used here is large because we found that higher-dimensional vectors were more effective than the deeper structures for this task, as memory capacity was more important than classification ability.", "For the same reason, the representations obtained from the LSTM layer are transformed linearly before the softmax function is applied, as we found that non-linear transformations, which are commonly used for final classification, did not help for this task.", "Experiments and Evaluation We used raw textual data from the ALT, 4 comprising around 20, 000 sentences translated from English.", "The data were divided into training, development, and test sets as specified by the project.", "5 For the SVM experiments, we used the offthe-shelf LIBLINEAR library (Fan et al., 2008) wrapped by the KyTea toolkit.", "6 Table 1 gives the recovery accuracies, demonstrating that recovery is not a difficult classification task, given well represented contextual features.", "In general, using up to 5-gram features before/after the simplified grapheme yielded the best results for the baseline, except with Burmese, 
where 7-gram features brought a small additional improvement.", "Because Burmese texts use relatively more spaces than the other three scripts, longer contexts help more.", "Meanwhile, Lao produced the worst results, possibly because the omission and merging process was harsh: Lao is the most phonetic of the four scripts, with the least redundant spellings.", "The LSTM-based RNN was implemented using DyNet (Neubig et al., 2017) , and it was trained using Adam (Kingma and Ba, 2014) with an initial learning rate of 10 −3 .", "If the accuracy decreased on the development set, the learning rate was halved, and learning was terminated when there was no improvement on the development set for three iterations.", "We did not use dropout (Srivastava et al., 2014) but instead a voting ensemble over a set of differently initialized models trained in parallel, which is both more effective and faster.", "As shown in Table 2 , the RNN outperformed SVM on all scripts in terms of top-1 accuracy.", "A more lenient evaluation, i.e., top-n accuracy, showed a satisfactory coverage of around 98% (Khmer and Lao) to 99% (Thai and Burmese) considering only the top four results.", "Fig.", "5 shows the effect of changing the size of the training dataset by repeatedly halving it until it was one-eighth of its original size, demonstrating that the RNN outperformed SVM regardless of training data size.", "The LSTM-based RNN should thus be a substantially better solution than the SVM for this task.", "We also investigated Burmese and Khmer further using manual evaluation.", "The results of RNN @1 ⊕16 in Table 2 were evaluated by native speakers, who examined the output writing units corresponding to each input simplified grapheme and classified the errors using four levels: 0) acceptable, i.e., alternative spelling, 1) clear and easy to identify the correct result, 2) confusing but possible to identify the correct result, and 3) incomprehensible.", "Table 3 shows the error distribution.", "For Burmese, most of the errors are at levels 1 and 2, and Khmer has a wider distribution.", "For both scripts, around 50% of the errors are serious (level 2 or 3), but the distributions suggest that they have different characteristics.", "We are currently conducting a case study on these errors for further language-specific improvements.", "Conclusion and Future Work In this study, a scheme was used to substantially simplify four abugidas, omitting most diacritics and merging the remaining characters.", "An SVM and an LSTM-based RNN were then used to recover the original texts, showing that the simplified abugidas could be recovered well.", "This illustrates the feasibility of encoding abugidas less redundantly, which could help with the development of more efficient input methods.", "As for the future work, we are planning to include language-specific optimizations in the design of the simplification scheme and to improve" ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Simplified Abugidas", "Recovery Methods", "Experiments and Evaluation", "Conclusion and Future Work" ] }
GEM-SciDuet-train-95#paper-1249#slide-8
Experimental Results Training Data Size
Number of graphemes after simplification RNN outperforms SVM, regardless of the training data size
Number of graphemes after simplification RNN outperforms SVM, regardless of the training data size
[]
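A minimal sketch of the LSTM-based recovery model described in the record above (simplified-grapheme bigrams embedded into 128-dimensional vectors, a single bi-directional LSTM layer with 256 units per direction giving a 512-dimensional concatenated state, and a purely linear projection before the softmax). This is a sketch assuming PyTorch rather than the DyNet toolkit used in the paper; the vocabulary size, label-set size, and all names are illustrative placeholders, not values from the paper.

import torch
import torch.nn as nn

class RecoveryRNN(nn.Module):
    # Point-wise recovery model: simplified-grapheme bigrams -> 128-dim embeddings
    # -> one bi-directional LSTM layer (256 units per direction, 512-dim states)
    # -> a purely linear projection to the label scores (no extra non-linearity,
    # which the paper reports did not help for this task).
    def __init__(self, num_bigrams, num_labels):
        super().__init__()
        self.embed = nn.Embedding(num_bigrams, 128)
        self.lstm = nn.LSTM(128, 256, num_layers=1, bidirectional=True, batch_first=True)
        self.out = nn.Linear(512, num_labels)

    def forward(self, bigram_ids):
        # bigram_ids: (batch, seq_len) indices of simplified-grapheme bigrams
        x = self.embed(bigram_ids)   # (batch, seq_len, 128)
        h, _ = self.lstm(x)          # (batch, seq_len, 512)
        return self.out(h)           # per-position scores over the original writing units

model = RecoveryRNN(num_bigrams=500, num_labels=200)        # placeholder sizes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # initial learning rate from the paper

The surrounding training schedule described above (halving the learning rate when development accuracy drops, stopping after three iterations without improvement, and voting over an ensemble of differently initialized models) is left outside this sketch.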
GEM-SciDuet-train-95#paper-1249#slide-9
1249
Simplified Abugidas
An abugida is a writing system where the consonant letters represent syllables with a default vowel and other vowels are denoted by diacritics. We investigate the feasibility of recovering the original text written in an abugida after omitting subordinate diacritics and merging consonant letters with similar phonetic values. This is crucial for developing more efficient input methods by reducing the complexity in abugidas. Four abugidas in the southern Brahmic family, i.e., Thai, Burmese, Khmer, and Lao, were studied using a newswire 20,000-sentence dataset. We compared the recovery performance of a support vector machine and an LSTM-based recurrent neural network, finding that the abugida graphemes could be recovered with 94%-97% accuracy at the top-1 level and 98%-99% at the top-4 level, even after omitting most diacritics (10-30 types) and merging the remaining 30-50 characters into 21 graphemes.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91 ], "paper_content_text": [ "Introduction Writing systems are used to record utterances in a wide range of languages and can be organized into the hierarchy shown in Fig.", "1 .", "The symbols in a writing system generally represent either speech sounds (phonograms) or semantic units (logograms) .", "Phonograms can be either segmental or syllabic, with segmental systems being more phonetic because they use separate symbols (i.e., letters) to represent consonants and vowels.", "Segmental systems can be further subdivided depending on their representation of vowels.", "Alphabets (e.g., the Latin, Cyrillic, and Greek scripts) are the most common and treat vowel and consonant let- ters equally.", "In contrast, abjads (e.g., the Arabic and Hebrew scripts) do not write most vowels explicitly.", "The third type, abugidas, also called alphasyllabary, includes features from both segmental and syllabic systems.", "In abugidas, consonant letters represent syllables with a default vowel, and other vowels are denoted by diacritics.", "Abugidas thus denote vowels less explicitly than alphabets but more explicitly than abjads, while being less phonetic than alphabets, but more phonetic than syllabaries.", "Since abugidas combine segmental and syllabic systems, they typically have more symbols than conventional alphabets.", "In this study, we investigate how to simplify and recover abugidas, with the aim of developing a more efficient method of encoding abugidas for input.", "Alphabets generally do not have a large set of symbols, making them easy to map to a traditional keyboard, and logogram and syllabic systems need specially designed input methods because of their large variety of symbols.", "Traditional input methods for abugidas are similar to those for alphabets, mapping two or three different symbols onto each key and requiring users to type each character and diacritic exactly.", "In contrast, we are able to substantially simplify inputting abugidas by encoding them in a lossy (or \"fuzzy\") way.", "TH ะ ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั MY ိ ိ ိ ိ ေိ ိ ိ ိ ိ KM ិ ិ ិ ិ ិ ិ ិ ើិ ើិ ើិ ើិ ែិ ៃិ ើិ ើិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ LO ະ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ຽ ិ ិ ិ ិ ិ ិ OMITTED I II I II I II I II MN K G U C J I Y T D N L P B M W R S H Q A E TH กขฃ คฅฆ ง จฉ ชซฌ ญ ย ฎฏฐดตถ ฑฒทธ ณน ลฦฬ บปผฝ พฟภ ม ว รฤ ศษส หฬ อ ๅ เ แ โ ใ ไ MY ကခ ဂဃ င စဆ ဇဈ ဉ ည ယ ိ ဋဌတထ ဍဎဒဓ ဏန လဠ ပဖ ဗဘ မ ဝ ိ ရ ြိ သဿ ဟ ိ အ ိ ိ ိ ိ KM កខ គឃ ង ចឆ ជឈ ញ យ ដឋតថ ឌឍទធ ណន លឡ បផ ពភ ម វ រ ឝឞស ហ អ ិ ្ LO ກຂ ຄ ງ ຈ ຊ ຍ ຢ ດຕຖ ທ ນ ລ ບປຜຝ ພຟ ມ ວ ຣ ສ ຫຮ ອ ເ ແ ໂ ໃ ໄ APP.", "DENTAL PALATE PRE-V. DE-V. PLOSIVE NAS.", "MERGED R-LIKE S-LIKE H-LIKE LONG-A ZERO-C. 
LABIAL PLOSIVE NAS.", "APP.", "PLOSIVE NAS.", "GUTTURAL PLOSIVE NAS.", "APP.", "Figure 3 : Merging and omission for Thai (TH), Burmese (MY), Khmer (KM), and Lao (LO) scripts.", "The MN row lists the mnemonics assigned to graphemes in our experiment.", "In this study, the mnemonics can be assigned arbitrarily, and we selected Latin letters related to the real pronunciation wherever possible.", "Fig.", "2 gives an overview of this study, showing examples in Khmer.", "We simplify abugidas by omitting vowel diacritics and merging consonant letters with identical or similar phonetic values, as shown in (a).", "This simplification is intuitive, both orthographically and phonetically.", "To resolve the ambiguities introduced by the simplification, we use data-driven methods to recover the original texts, as shown in (b).", "We conducted experiments on four southern Brahmic scripts, i.e., Thai, Burmese, Khmer, and Lao scripts, with a unified framework, using data from the Asian Language Treebank (ALT) (Riza et al., 2016) .", "The experiments show that the abugidas can be recovered satisfactorily by a recurrent neural network (RNN) using long short-term memory (LSTM) units, even when nearly all of the diacritics (10 -30 types) have been omitted and the remaining 30 -50 characters have been merged into 21 graphemes.", "Thai gave the best performance, with 97% top-1 accuracy for graphemes and over 99% top-4 accuracy.", "Lao, which gave the worst performance, still achieved the top-1 and top-4 accuracies of around 94% and 98%, respectively.", "The Burmese and Khmer results, which lay in-between the other two, were also investigated by manual evaluation.", "Related Work Some optimized keyboard layout have been proposed for specific abugidas (Ouk et al., 2008) .", "Most studies on input methods have focused on Chinese and Japanese characters, where thousands of symbols need to be encoded and recovered.", "For Chinese characters, Chen and Lee (2000) made an early attempt to apply statistical methods to sentence-level processing, using a hidden Markov model.", "Others have examined max-entropy models, support vector machines (SVMs), conditional random fields (CRFs), and machine translation techniques (Wang et al., 2006; Jiang et al., 2007; Li et al., 2009; Yang et al., 2012) .", "Similar methods have also been developed for character conversion in Japanese (Tokunaga et al., 2011) .", "This study takes a similar approach to the research on Chinese and Japanese, transforming a less informative encoding into strings in a natural and redundant writing system.", "Furthermore, our study can be considered as a specific lossy compression scheme on abugida textual data.", "Unlike images or audio, the lossy text compression has received little attention as it may cause difficulties with reading (Witten et al., 1994) .", "However, we handle this issue within an input method framework, where the simplified encoding is not read directly.", "Simplified Abugidas We designed simplification schemes for several different scripts within a unified framework based on phonetics and conventional usages, without considering many language specific features.", "Our primary aim was to investigate the feasibility of reducing the complexity of abugidas and to establish methods of recovering the texts.", "We will consider language-specific optimization in a future work, via both data-and user-driven studies.", "The simplification scheme is shown in Fig.", "3 .", "1 Generally, the merges are based on the common distribution of 
consonant phonemes in most natural languages, as well as the etymology of the characters in each abugida.", "Specifically, three or four graphemes are preserved for the different articulation locations (i.e., guttural, palate, dental, and labial), that two for plosives, one for nasal (NAS.", "), and one for approximant (APP.)", "if present.", "Additional consonants such as trills (R-LIKE), fricatives (S-/H-LIKE), and empty (ZERO-C.) are also assigned their own graphemes.", "Although the simplification omits most diacritics, three types are retained, i.e., one basic mark common to nearly all Brahmic abugidas (LONG-A), the preposed vowels in Thai and Lao (PRE-V.), and the vowel-depressors (and/or consonant-stackers) in Burmese and Khmer (DE-V.).", "We assigned graphemes to these because we found they informed the spelling and were intuitive when typing.", "The net result was the omission of 18 types of diacritics in Thai, 9 in Burmese, 27 in Khmer, and 18 in Lao, and the merging of the remaining 53 types of characters in Thai, 43 in Burmese, 37 in Khmer, and 33 in Lao, into a unified set of 21 graphemes.", "The simplification thus substantially reduces the number of graphemes, and represents a straightforward benchmark for further languagespecific refinement to build on.", "Recovery Methods The recovery process can be formalized as a sequential labeling task, that takes the simplified encoding as input, and outputs the writing units, composed of merged and omitted character(s) in the original abugidas, corresponding to each simplified grapheme.", "Although structured learning methods such as CRF (Lafferty et al., 2001) have been widely used, we found that searching for the label sequences in the output space was too costly, because of the number of labels to be recovered.", "2 Instead, we adopted non-structured point-wise prediction methods using a linear SVM (Cortes and Vapnik, 1995) and an LSTM-based RNN (Hochreiter and Schmidhuber, 1997) .", "Fig.", "4 shows the overall structure of the RNN.", "After many experimentations, a general \"shallow and broad\" configuration was adopted.", "Specifically, simplified grapheme bi-grams are first embedded into 128-dimensional vectors 3 and then encoded in one layer of a bi-directional LSTM, resulting in a final representation consisting of a 512-dimensional vector that concatenates two 256-dimensional vectors from the two directions.", "The number of dimensions used here is large because we found that higher-dimensional vectors were more effective than the deeper structures for this task, as memory capacity was more important than classification ability.", "For the same reason, the representations obtained from the LSTM layer are transformed linearly before the softmax function is applied, as we found that non-linear transformations, which are commonly used for final classification, did not help for this task.", "Experiments and Evaluation We used raw textual data from the ALT, 4 comprising around 20, 000 sentences translated from English.", "The data were divided into training, development, and test sets as specified by the project.", "5 For the SVM experiments, we used the offthe-shelf LIBLINEAR library (Fan et al., 2008) wrapped by the KyTea toolkit.", "6 Table 1 gives the recovery accuracies, demonstrating that recovery is not a difficult classification task, given well represented contextual features.", "In general, using up to 5-gram features before/after the simplified grapheme yielded the best results for the baseline, except with Burmese, 
where 7-gram features brought a small additional improvement.", "Because Burmese texts use relatively more spaces than the other three scripts, longer contexts help more.", "Meanwhile, Lao produced the worst results, possibly because the omission and merging process was harsh: Lao is the most phonetic of the four scripts, with the least redundant spellings.", "The LSTM-based RNN was implemented using DyNet (Neubig et al., 2017) , and it was trained using Adam (Kingma and Ba, 2014) with an initial learning rate of 10 −3 .", "If the accuracy decreased on the development set, the learning rate was halved, and learning was terminated when there was no improvement on the development set for three iterations.", "We did not use dropout (Srivastava et al., 2014) but instead a voting ensemble over a set of differently initialized models trained in parallel, which is both more effective and faster.", "As shown in Table 2 , the RNN outperformed SVM on all scripts in terms of top-1 accuracy.", "A more lenient evaluation, i.e., top-n accuracy, showed a satisfactory coverage of around 98% (Khmer and Lao) to 99% (Thai and Burmese) considering only the top four results.", "Fig.", "5 shows the effect of changing the size of the training dataset by repeatedly halving it until it was one-eighth of its original size, demonstrating that the RNN outperformed SVM regardless of training data size.", "The LSTM-based RNN should thus be a substantially better solution than the SVM for this task.", "We also investigated Burmese and Khmer further using manual evaluation.", "The results of RNN @1 ⊕16 in Table 2 were evaluated by native speakers, who examined the output writing units corresponding to each input simplified grapheme and classified the errors using four levels: 0) acceptable, i.e., alternative spelling, 1) clear and easy to identify the correct result, 2) confusing but possible to identify the correct result, and 3) incomprehensible.", "Table 3 shows the error distribution.", "For Burmese, most of the errors are at levels 1 and 2, and Khmer has a wider distribution.", "For both scripts, around 50% of the errors are serious (level 2 or 3), but the distributions suggest that they have different characteristics.", "We are currently conducting a case study on these errors for further language-specific improvements.", "Conclusion and Future Work In this study, a scheme was used to substantially simplify four abugidas, omitting most diacritics and merging the remaining characters.", "An SVM and an LSTM-based RNN were then used to recover the original texts, showing that the simplified abugidas could be recovered well.", "This illustrates the feasibility of encoding abugidas less redundantly, which could help with the development of more efficient input methods.", "As for the future work, we are planning to include language-specific optimizations in the design of the simplification scheme and to improve" ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Simplified Abugidas", "Recovery Methods", "Experiments and Evaluation", "Conclusion and Future Work" ] }
GEM-SciDuet-train-95#paper-1249#slide-9
Manual Evaluation
On Burmese and Khmer best results by RNN To classify errors into four levels 0. acceptable, i.e., alternative spelling 1. clear and easy to identify the correct result 2. confusing but possible to identify the correct result 3. incomprehensible
On Burmese and Khmer best results by RNN To classify errors into four levels 0. acceptable, i.e., alternative spelling 1. clear and easy to identify the correct result 2. confusing but possible to identify the correct result 3. incomprehensible
[]
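The manual evaluation in the record above assigns one of four error levels (0-3) to each recovery error; turning a list of such judgments into the per-level distribution reported in Table 3 is a simple tally. The snippet below is an illustrative sketch, and the variable names are hypothetical.

from collections import Counter

def error_distribution(judgments):
    # judgments: a list of integer error levels (0 = acceptable alternative spelling,
    # 1 = clear, 2 = confusing, 3 = incomprehensible), one per observed error.
    counts = Counter(judgments)
    total = sum(counts.values()) or 1
    return {level: counts.get(level, 0) / total for level in range(4)}

# Example: error_distribution([1, 1, 2, 3, 0]) -> {0: 0.2, 1: 0.4, 2: 0.2, 3: 0.2}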
GEM-SciDuet-train-95#paper-1249#slide-10
1249
Simplified Abugidas
An abugida is a writing system where the consonant letters represent syllables with a default vowel and other vowels are denoted by diacritics. We investigate the feasibility of recovering the original text written in an abugida after omitting subordinate diacritics and merging consonant letters with similar phonetic values. This is crucial for developing more efficient input methods by reducing the complexity in abugidas. Four abugidas in the southern Brahmic family, i.e., Thai, Burmese, Khmer, and Lao, were studied using a newswire 20,000-sentence dataset. We compared the recovery performance of a support vector machine and an LSTM-based recurrent neural network, finding that the abugida graphemes could be recovered with 94%-97% accuracy at the top-1 level and 98%-99% at the top-4 level, even after omitting most diacritics (10-30 types) and merging the remaining 30-50 characters into 21 graphemes.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91 ], "paper_content_text": [ "Introduction Writing systems are used to record utterances in a wide range of languages and can be organized into the hierarchy shown in Fig.", "1 .", "The symbols in a writing system generally represent either speech sounds (phonograms) or semantic units (logograms) .", "Phonograms can be either segmental or syllabic, with segmental systems being more phonetic because they use separate symbols (i.e., letters) to represent consonants and vowels.", "Segmental systems can be further subdivided depending on their representation of vowels.", "Alphabets (e.g., the Latin, Cyrillic, and Greek scripts) are the most common and treat vowel and consonant let- ters equally.", "In contrast, abjads (e.g., the Arabic and Hebrew scripts) do not write most vowels explicitly.", "The third type, abugidas, also called alphasyllabary, includes features from both segmental and syllabic systems.", "In abugidas, consonant letters represent syllables with a default vowel, and other vowels are denoted by diacritics.", "Abugidas thus denote vowels less explicitly than alphabets but more explicitly than abjads, while being less phonetic than alphabets, but more phonetic than syllabaries.", "Since abugidas combine segmental and syllabic systems, they typically have more symbols than conventional alphabets.", "In this study, we investigate how to simplify and recover abugidas, with the aim of developing a more efficient method of encoding abugidas for input.", "Alphabets generally do not have a large set of symbols, making them easy to map to a traditional keyboard, and logogram and syllabic systems need specially designed input methods because of their large variety of symbols.", "Traditional input methods for abugidas are similar to those for alphabets, mapping two or three different symbols onto each key and requiring users to type each character and diacritic exactly.", "In contrast, we are able to substantially simplify inputting abugidas by encoding them in a lossy (or \"fuzzy\") way.", "TH ะ ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั ั MY ိ ိ ိ ိ ေိ ိ ိ ိ ိ KM ិ ិ ិ ិ ិ ិ ិ ើិ ើិ ើិ ើិ ែិ ៃិ ើិ ើិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ LO ະ ិ ិ ិ ិ ិ ិ ិ ិ ិ ិ ຽ ិ ិ ិ ិ ិ ិ OMITTED I II I II I II I II MN K G U C J I Y T D N L P B M W R S H Q A E TH กขฃ คฅฆ ง จฉ ชซฌ ญ ย ฎฏฐดตถ ฑฒทธ ณน ลฦฬ บปผฝ พฟภ ม ว รฤ ศษส หฬ อ ๅ เ แ โ ใ ไ MY ကခ ဂဃ င စဆ ဇဈ ဉ ည ယ ိ ဋဌတထ ဍဎဒဓ ဏန လဠ ပဖ ဗဘ မ ဝ ိ ရ ြိ သဿ ဟ ိ အ ိ ိ ိ ိ KM កខ គឃ ង ចឆ ជឈ ញ យ ដឋតថ ឌឍទធ ណន លឡ បផ ពភ ម វ រ ឝឞស ហ អ ិ ្ LO ກຂ ຄ ງ ຈ ຊ ຍ ຢ ດຕຖ ທ ນ ລ ບປຜຝ ພຟ ມ ວ ຣ ສ ຫຮ ອ ເ ແ ໂ ໃ ໄ APP.", "DENTAL PALATE PRE-V. DE-V. PLOSIVE NAS.", "MERGED R-LIKE S-LIKE H-LIKE LONG-A ZERO-C. 
LABIAL PLOSIVE NAS.", "APP.", "PLOSIVE NAS.", "GUTTURAL PLOSIVE NAS.", "APP.", "Figure 3 : Merging and omission for Thai (TH), Burmese (MY), Khmer (KM), and Lao (LO) scripts.", "The MN row lists the mnemonics assigned to graphemes in our experiment.", "In this study, the mnemonics can be assigned arbitrarily, and we selected Latin letters related to the real pronunciation wherever possible.", "Fig.", "2 gives an overview of this study, showing examples in Khmer.", "We simplify abugidas by omitting vowel diacritics and merging consonant letters with identical or similar phonetic values, as shown in (a).", "This simplification is intuitive, both orthographically and phonetically.", "To resolve the ambiguities introduced by the simplification, we use data-driven methods to recover the original texts, as shown in (b).", "We conducted experiments on four southern Brahmic scripts, i.e., Thai, Burmese, Khmer, and Lao scripts, with a unified framework, using data from the Asian Language Treebank (ALT) (Riza et al., 2016) .", "The experiments show that the abugidas can be recovered satisfactorily by a recurrent neural network (RNN) using long short-term memory (LSTM) units, even when nearly all of the diacritics (10 -30 types) have been omitted and the remaining 30 -50 characters have been merged into 21 graphemes.", "Thai gave the best performance, with 97% top-1 accuracy for graphemes and over 99% top-4 accuracy.", "Lao, which gave the worst performance, still achieved the top-1 and top-4 accuracies of around 94% and 98%, respectively.", "The Burmese and Khmer results, which lay in-between the other two, were also investigated by manual evaluation.", "Related Work Some optimized keyboard layout have been proposed for specific abugidas (Ouk et al., 2008) .", "Most studies on input methods have focused on Chinese and Japanese characters, where thousands of symbols need to be encoded and recovered.", "For Chinese characters, Chen and Lee (2000) made an early attempt to apply statistical methods to sentence-level processing, using a hidden Markov model.", "Others have examined max-entropy models, support vector machines (SVMs), conditional random fields (CRFs), and machine translation techniques (Wang et al., 2006; Jiang et al., 2007; Li et al., 2009; Yang et al., 2012) .", "Similar methods have also been developed for character conversion in Japanese (Tokunaga et al., 2011) .", "This study takes a similar approach to the research on Chinese and Japanese, transforming a less informative encoding into strings in a natural and redundant writing system.", "Furthermore, our study can be considered as a specific lossy compression scheme on abugida textual data.", "Unlike images or audio, the lossy text compression has received little attention as it may cause difficulties with reading (Witten et al., 1994) .", "However, we handle this issue within an input method framework, where the simplified encoding is not read directly.", "Simplified Abugidas We designed simplification schemes for several different scripts within a unified framework based on phonetics and conventional usages, without considering many language specific features.", "Our primary aim was to investigate the feasibility of reducing the complexity of abugidas and to establish methods of recovering the texts.", "We will consider language-specific optimization in a future work, via both data-and user-driven studies.", "The simplification scheme is shown in Fig.", "3 .", "1 Generally, the merges are based on the common distribution of 
consonant phonemes in most natural languages, as well as the etymology of the characters in each abugida.", "Specifically, three or four graphemes are preserved for the different articulation locations (i.e., guttural, palate, dental, and labial), that two for plosives, one for nasal (NAS.", "), and one for approximant (APP.)", "if present.", "Additional consonants such as trills (R-LIKE), fricatives (S-/H-LIKE), and empty (ZERO-C.) are also assigned their own graphemes.", "Although the simplification omits most diacritics, three types are retained, i.e., one basic mark common to nearly all Brahmic abugidas (LONG-A), the preposed vowels in Thai and Lao (PRE-V.), and the vowel-depressors (and/or consonant-stackers) in Burmese and Khmer (DE-V.).", "We assigned graphemes to these because we found they informed the spelling and were intuitive when typing.", "The net result was the omission of 18 types of diacritics in Thai, 9 in Burmese, 27 in Khmer, and 18 in Lao, and the merging of the remaining 53 types of characters in Thai, 43 in Burmese, 37 in Khmer, and 33 in Lao, into a unified set of 21 graphemes.", "The simplification thus substantially reduces the number of graphemes, and represents a straightforward benchmark for further languagespecific refinement to build on.", "Recovery Methods The recovery process can be formalized as a sequential labeling task, that takes the simplified encoding as input, and outputs the writing units, composed of merged and omitted character(s) in the original abugidas, corresponding to each simplified grapheme.", "Although structured learning methods such as CRF (Lafferty et al., 2001) have been widely used, we found that searching for the label sequences in the output space was too costly, because of the number of labels to be recovered.", "2 Instead, we adopted non-structured point-wise prediction methods using a linear SVM (Cortes and Vapnik, 1995) and an LSTM-based RNN (Hochreiter and Schmidhuber, 1997) .", "Fig.", "4 shows the overall structure of the RNN.", "After many experimentations, a general \"shallow and broad\" configuration was adopted.", "Specifically, simplified grapheme bi-grams are first embedded into 128-dimensional vectors 3 and then encoded in one layer of a bi-directional LSTM, resulting in a final representation consisting of a 512-dimensional vector that concatenates two 256-dimensional vectors from the two directions.", "The number of dimensions used here is large because we found that higher-dimensional vectors were more effective than the deeper structures for this task, as memory capacity was more important than classification ability.", "For the same reason, the representations obtained from the LSTM layer are transformed linearly before the softmax function is applied, as we found that non-linear transformations, which are commonly used for final classification, did not help for this task.", "Experiments and Evaluation We used raw textual data from the ALT, 4 comprising around 20, 000 sentences translated from English.", "The data were divided into training, development, and test sets as specified by the project.", "5 For the SVM experiments, we used the offthe-shelf LIBLINEAR library (Fan et al., 2008) wrapped by the KyTea toolkit.", "6 Table 1 gives the recovery accuracies, demonstrating that recovery is not a difficult classification task, given well represented contextual features.", "In general, using up to 5-gram features before/after the simplified grapheme yielded the best results for the baseline, except with Burmese, 
where 7-gram features brought a small additional improvement.", "Because Burmese texts use relatively more spaces than the other three scripts, longer contexts help more.", "Meanwhile, Lao produced the worst results, possibly because the omission and merging process was harsh: Lao is the most phonetic of the four scripts, with the least redundant spellings.", "The LSTM-based RNN was implemented using DyNet (Neubig et al., 2017) , and it was trained using Adam (Kingma and Ba, 2014) with an initial learning rate of 10 −3 .", "If the accuracy decreased on the development set, the learning rate was halved, and learning was terminated when there was no improvement on the development set for three iterations.", "We did not use dropout (Srivastava et al., 2014) but instead a voting ensemble over a set of differently initialized models trained in parallel, which is both more effective and faster.", "As shown in Table 2 , the RNN outperformed SVM on all scripts in terms of top-1 accuracy.", "A more lenient evaluation, i.e., top-n accuracy, showed a satisfactory coverage of around 98% (Khmer and Lao) to 99% (Thai and Burmese) considering only the top four results.", "Fig.", "5 shows the effect of changing the size of the training dataset by repeatedly halving it until it was one-eighth of its original size, demonstrating that the RNN outperformed SVM regardless of training data size.", "The LSTM-based RNN should thus be a substantially better solution than the SVM for this task.", "We also investigated Burmese and Khmer further using manual evaluation.", "The results of RNN @1 ⊕16 in Table 2 were evaluated by native speakers, who examined the output writing units corresponding to each input simplified grapheme and classified the errors using four levels: 0) acceptable, i.e., alternative spelling, 1) clear and easy to identify the correct result, 2) confusing but possible to identify the correct result, and 3) incomprehensible.", "Table 3 shows the error distribution.", "For Burmese, most of the errors are at levels 1 and 2, and Khmer has a wider distribution.", "For both scripts, around 50% of the errors are serious (level 2 or 3), but the distributions suggest that they have different characteristics.", "We are currently conducting a case study on these errors for further language-specific improvements.", "Conclusion and Future Work In this study, a scheme was used to substantially simplify four abugidas, omitting most diacritics and merging the remaining characters.", "An SVM and an LSTM-based RNN were then used to recover the original texts, showing that the simplified abugidas could be recovered well.", "This illustrates the feasibility of encoding abugidas less redundantly, which could help with the development of more efficient input methods.", "As for the future work, we are planning to include language-specific optimizations in the design of the simplification scheme and to improve" ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Simplified Abugidas", "Recovery Methods", "Experiments and Evaluation", "Conclusion and Future Work" ] }
GEM-SciDuet-train-95#paper-1249#slide-10
Conclusion and Future Work
Abugidas can be largely simplified and recovered with high accuracy Four Brahmic abugidas are investigated Simplified into a compact symbol set (around 20 graphemes) Recovered satisfactorily by standard machine learning methods Experimentally show the feasibility to encode abugidas in a lossy manner To develop a practical input method for abugidas
Abugidas can be largely simplified and recovered with high accuracy Four Brahmic abugidas are investigated Simplified into a compact symbol set (around 20 graphemes) Recovered satisfactorily by standard machine learning methods Experimentally show the feasibility to encode abugidas in a lossy manner To develop a practical input method for abugidas
[]
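The simplification scheme summarized in the record above amounts to a many-to-one character-to-grapheme merge plus a set of diacritics that are dropped entirely. A minimal sketch in Python is given below; the few Thai entries and the dropped marks are illustrative stand-ins only and do not reproduce the paper's full table (Figure 3).

# Illustrative merge table: each character maps to one of the ~21 shared graphemes.
MERGE_TABLE = {
    "\u0e01": "K", "\u0e02": "K",   # two Thai guttural plosives merged into one grapheme (illustrative)
    "\u0e04": "G",                  # another Thai guttural plosive kept as a separate grapheme (illustrative)
    "\u0e07": "U",                  # Thai guttural nasal (illustrative)
}
# Illustrative set of diacritics that the scheme omits entirely.
OMITTED_DIACRITICS = {"\u0e31", "\u0e48", "\u0e49"}

def simplify(text):
    # Drop omitted diacritics and map everything else through the merge table;
    # characters outside the table (spaces, digits, punctuation) pass through unchanged.
    return "".join(MERGE_TABLE.get(ch, ch) for ch in text if ch not in OMITTED_DIACRITICS)

The recovery models described in the record above then learn the inverse, one-to-many mapping from these graphemes back to the original writing units.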
GEM-SciDuet-train-96#paper-1251#slide-0
1251
A Co-Matching Model for Multi-choice Reading Comprehension
Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair. This paper proposes a new co-matching approach to this problem, which jointly models whether a passage can match both a question and a candidate answer. Experimental results on the RACE dataset demonstrate that our approach achieves state-of-the-art performance.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119 ], "paper_content_text": [ "Introduction Enabling machines to understand natural language text is arguably the ultimate goal of natural language processing, and the task of machine reading comprehension is an intermediate step towards this ultimate goal (Richardson et al., 2013; Hermann et al., 2015; Hill et al., 2015; Rajpurkar et al., 2016; Nguyen et al., 2016) .", "Recently, Lai et al.", "(2017) released a new multi-choice machine comprehension dataset called RACE that was extracted from middle and high school English examinations in China.", "Figure 1 shows an example passage and two related questions from RACE.", "The key difference between RACE and previously released machine comprehension datasets (e.g., the CNN/Daily Mail dataset (Hermann et al., 2015) and SQuAD (Rajpurkar et al., 2016) ) is that the answers in RACE often cannot be directly extracted from the given passages, as illustrated by the two example questions (Q1 & Q2) in Figure 1 .", "Thus, answering these questions is more challenging and requires more inferences.", "Previous approaches to machine comprehension are usually based on pairwise sequence matching, where either the passage is matched against the sequence that concatenates both the question and a candidate answer (Yin et al., 2016) , or the passage is matched against the question alone followed by a second step of selecting an answer using the matching result of the first step (Lai et al., 2017; Zhou et al., 2018) .", "However, these approaches may not be suitable for multi-choice reading comprehension since questions and answers are often equally important.", "Matching the passage only against the question may not be meaningful and may lead to loss of information from the original passage, as we can see from the first example question in Figure 1 .", "On the other hand, concatenating the question and the answer into a single sequence for matching may not work, either, due to the loss of interaction information between a question and an answer.", "As illustrated by Q2 in Figure 1 , the model may need to recognize what \"he\" and \"it\" in candidate answer (c) refer to in the question, in order to select (c) as the correct answer.", "This observation of the RACE dataset shows that we face a new challenge of matching sequence triplets (i.e., passage, question and answer) instead of pairwise matching.", "In this paper, we propose a new model to match a question-answer pair to a given passage.", "Our comatching approach explicitly treats the question and the candidate answer as two sequences and jointly matches them to the given passage.", "Specifically, for each position in the passage, we compute two attention-weighted vectors, where one is from the question and the other from the candidate answer.", "Then, two matching representations are constructed: the first one matches the passage with the question while the second one matches the passage with the candidate answer.", "These two newly constructed matching representations together form a co-matching 
state.", "Intuitively, it encodes the locational information of the question and the candidate answer matched to a specific context of the passage.", "Finally, we apply a hierar-Passage: My father wasn't a king, he was a taxi driver, but I am a prince-Prince Renato II, of the country Pontinha , an island fort on Funchal harbour.", "In 1903, the king of Portugal sold the land to a wealthy British family, the Blandys, who make Madeira wine.", "Fourteen years ago the family decided to sell it for just EUR25,000, but nobody wanted to buy it either.", "I met Blandy at a party and he asked if I'd like to buy the island.", "Of course I said yes, but I had no money-I was just an art teacher.", "I tried to find some business partners, who all thought I was crazy.", "So I sold some of my possessions, put my savings together and bought it.", "Of course, my family and my friends-all thought I was mad ...", "If l want to have a national flag, it could be blue today, red tomorrow.", "... My family sometimes drops by, and other people come every day because the country is free for tourists to visit ... Q1: Which statement of the following is true?", "Q2: How did the author get the island?", "a.", "The author made his living by driving.", "a.", "It was a present from Blandy.", "b.", "The author's wife supported to buy the island.", "b.", "The king sold it to him.", "c. Blue and red are the main colors of his national flag.", "c. He bought it from Blandy.", "d. People can travel around the island free of charge.", "d. He inherited from his father.", "chical LSTM (Tang et al., 2015) over the sequence of co-matching states at different positions of the passage.", "Information is aggregated from wordlevel to sentence-level and then from sentencelevel to document-level.", "In this way, our model can better deal with the questions that require evidence scattered in different sentences in the passage.", "Our model improves the state-of-the-art model by 3 percentage on the RACE dataset.", "Our code will be released under https://github.", "com/shuohangwang/comatch.", "Model For the task of multi-choice reading comprehension, the machine is given a passage, a question and a set of candidate answers.", "The goal is to select the correct answer from the candidates.", "Let us use P ∈ R d×P , Q ∈ R d×Q and A ∈ R d×A to represent the passage, the question and a candidate answer, respectively, where each word in each sequence is represented by an embedding vector.", "d is the dimensionality of the embeddings, and P , Q, and A are the lengths of these sequences.", "Overall our model works as follows.", "For each candidate answer, our model constructs a vector that represents the matching of P with both Q and A.", "The vectors of all candidate answers are then used for answer selection.", "Because we simultaneously match P with Q and A, we call this a comatching model.", "In Section 2.1 we introduce the word-level co-matching mechanism.", "Then in Section 2.2 we introduce a hierarchical aggregation process.", "Finally in Section 2.3 we present the objective function.", "An overview of our co-matching model is shown in Figure 2 .", "Co-matching The co-matching part of our model aims to match the passage with the question and the candidate answer at the word-level.", "Inspired by some previous work (Wang and Jiang, 2016; Trischler et al., 2016) , we first use bi-directional LSTMs (Hochreiter and Schmidhuber, 1997) to pre-process the sequences as follows: H p = Bi-LSTM(P), H q = Bi-LSTM(Q), H a = Bi-LSTM(A), (1) where H p ∈ 
R l×P , H q ∈ R l×Q and H a ∈ R l×A are the sequences of hidden states generated by the bi-directional LSTMs.", "We then make use of the attention mechanism to match each state in the passage to an aggregated representation of the question and the candidate answer.", "The attention vectors are computed as follows: G q = SoftMax (W g H q + b g ⊗ e Q ) T H p , G a = SoftMax (W g H a + b g ⊗ e Q ) T H p , H q = H q G q , H a = H a G a , (2) where W g ∈ R l×l and b g ∈ R l are the parameters to learn.", "e Q ∈ R Q is a vector of all 1s and it is used to repeat the bias vector into the matrix.", "G q ∈ R Q×P and G a ∈ R A×P are the attention M q = ReLU W m H q H p H q ⊗ H p + b m , M a = ReLU W m H a H p H a ⊗ H p + b m , C = M q M a , (3) where W g ∈ R l×2l and b g ∈ R l are the parameters to learn.", "· · is the column-wise concatenation of two matrices, and · · and · ⊗ · are the elementwise subtraction and multiplication between two matrices, which are used to build better matching representations (Tai et al., 2015; .", "M q ∈ R l×P represents the matching between the hidden states of the passage and the corresponding attention-weighted representations of the question.", "Similarly, we match the passage with the candidate answer and represent the matching results using M a ∈ R l×P .", "Finally C ∈ R 2l×P is the concatenation of M q ∈ R l×P and M a ∈ R l×P and represents how each passage state can be matched with the question and the candidate answer.", "We refer to c ∈ R 2l , which is a single column of C, as a co-matching state that concurrently matches a passage state with both the question and the candidate answer.", "Hierarchical Aggregation In order to capture the sentence structure of the passage, we further modify the model presented earlier and build a hierarchical LSTM (Tang et al., 2015) on top of the co-matching states.", "Specifically, we first split the passage into sentences and we use P 1 , P 2 , .", ".", ".", ", P N to represent these sentences, where N is the number of sentences in the passage.", "For each triplet {P n , Q, A}, n ∈ [1, N ], we can get the co-matching states C n through Eqn.", "(1-3) .", "Then we build a bi-directional LSTM followed by max pooling on top of the comatching states of each sentence as follows: h s n = MaxPooling (Bi-LSTM (C n )) , (4) where the function MaxPooling(·) is the row-wise max pooling operation.", "h s n ∈ R l , n ∈ [1, N ] is the sentence-level aggregation of the co-matching states.", "All these representations will be further integrated by another Bi-LSTM to get the final triplet matching representation.", "where H s ∈ R l×N is the concatenation of all the sentence-level representations and it is the input of a higher level LSTM.", "h t ∈ R l is the final output of the matching between the sequences of the passage, the question and the candidate answer.", "Objective function For each candidate answer A i , we can build its matching representation h t i ∈ R l with the question and the passage through Eqn.", "(5).", "Our loss function is computed as follows: L(A i |P, Q) = − log exp(w T h t i ) 4 j=1 exp(w T h t j ) , (6) where w ∈ R l is a parameter to learn.", "Experiment To evaluate the effectiveness of our hierarchical co-matching model, we use the RACE dataset (Lai et al., 2017) , which consists of two subsets: RACE-M comes from middle school examinations while RACE-H comes from high school examinations.", "RACE is the combination of the two.", "We compare our model with a number of baseline models.", "We also compare with two 
variants of our model for an ablation study.", "Comparison with Baselines We compare our model with the following baselines: • Sliding Window based method (Richardson et al., 2013) computes the matching score based on the sum of the tf-idf values of the matched words between the question-answer pair and each subpassage with a fixed a window size.", "• Stanford Attentive Reader (AR) (Chen et al., 2016) first builds a question-related passage representation through attention mechanism and then compares it with each candidate answer representation to get the answer probabilities.", "• GA (Dhingra et al., 2017) uses gated attention mechanism with multiple hops to extract the question-related information of the passage and compares it with candidate answers.", "• ElimiNet (Soham et al., 2017) tries to first eliminate the most irrelevant choices and then select the best answer.", "• HAF (Zhou et al., 2018) considers not only the matching between the three sequences, namely, passage, question and candidate answer, but also the matching between the candidate answers.", "• MUSIC (Xu et al., 2017) integrates different sequence matching strategies into the model and also adds a unit of multi-step reasoning for selecting the answer.", "Besides, we also report the following two results as reference points: Turkers is the performance of Amazon Turkers on a randomly sampled subset of the RACE test set.", "Ceiling is the percentage of the unambiguous questions with a correct answer in a subset of the test set.", "The performance of our model together with the baselines are shown in Table 2 .", "We can see that our proposed complete model, Hier-Co-Matching, achieved the best performance among all the public results.", "Still, there is a huge gap between the best machine reading performance and the human performance, showing the great potential for further research.", "Ablation Study Moreover, we conduct an ablation study of our model architecture.", "In this study, we are mainly interested in the contribution of each component introduced in this work to our final results.", "We studied two key factors: (1) the comatching module and (2) the hierarchical aggregation approach.", "We observed a 4 percentage performance decrease by replacing the co-matching module with a single matching state (i.e., only M a in Eqn (3)) by directly concatenating the question with each candidate answer (Yin et al., 2016) .", "We also observe about 2 percentage decrease when we treat the passage as a plain sequence, and run a two-layer LSTM (to ensure the numbers of parameters are comparable) over the whole passage instead of the hierarchical LSTM.", "Question Type Analysis We also conducted an analysis on what types of questions our model can handle better.", "We find that our model obtains similar performance on the \"wh\" questions such as \"why,\" \"what,\" \"when\" and \"where\" questions, on which the performance is usually around 50%.", "We also check statement-justification questions with the keyword \"true\" (e.g., \"Which of the following statements is true\"), negation questions with the keyword \"not\" (e.g., \"which of the following is not true\"), and summarization questions with the keyword \"title\" (e.g., \"what is the best title for the passage?", "\"), and their performance is 51%, 52% and 48%, respectively.", "We can see that the performance of our model on different types of questions in the RACE dataset is quite similar.", "However, our model is only based on wordlevel matching and may not have the ability 
of reasoning.", "In order to answer questions that require summarization, inference or reasoning, we still need to further explore the dataset and improve the model.", "Finally, we further compared our model to the baseline, which concatenates the question with each candidate answer, and our model can achieve better performance on different types of questions.", "For example, on the subset of the questions with pronouns, our model can achieve better accuracy of 49.8% than 47.9%.", "Similarly, on statement-justification questions with the keyword \"true\", our model could achieve better accuracy of 51% than 47%.", "Conclusions In this paper, we proposed a co-matching model for multi-choice reading comprehension.", "The model consists of a co-matching component and a hierarchical aggregation component.", "We showed that our model could achieve state-of-the-art performance on the RACE dataset.", "In the future, we will adapt the idea of co-matching and hierarchical aggregation to the standard open-domain QA setting for answer candidate reranking .", "We will also further study how to explicitly model inference and reasoning on the RACE dataset." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "4" ], "paper_header_content": [ "Introduction", "Model", "Co-matching", "Hierarchical Aggregation", "Objective function", "Experiment", "Conclusions" ] }
GEM-SciDuet-train-96#paper-1251#slide-0
Reading Comprehension
The task: to answer questions given a passage of text Children's Book Test [Hill et al. 2016] NarrativeQA [Kocisky et al. 2018]
The task: to answer questions given a passage of text Children's Book Test [Hill et al. 2016] NarrativeQA [Kocisky et al. 2018]
[]
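A minimal sketch of the word-level co-matching step described in the record above (the attention and matching representations of Eqns. 2-3), assuming PyTorch; the hidden size, batching details, and names are placeholders, and the Bi-LSTM encoders of Eqn. 1 are assumed to have already produced the passage, question, and answer states.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CoMatch(nn.Module):
    # For each passage position, attend over the question and over the candidate
    # answer, then build matching vectors from the element-wise difference and
    # product, and concatenate the two into a co-matching state (Eqns. 2-3).
    def __init__(self, l):
        super().__init__()
        self.W_g = nn.Linear(l, l)       # attention projection (W^g, b^g)
        self.W_m = nn.Linear(2 * l, l)   # matching projection (W^m, b^m)

    def match(self, Hp, Hx):
        # Hp: (batch, P, l) passage states; Hx: (batch, X, l) question or answer states
        G = F.softmax(torch.bmm(self.W_g(Hx), Hp.transpose(1, 2)), dim=1)   # (batch, X, P)
        Hbar = torch.bmm(G.transpose(1, 2), Hx)                             # attention-weighted states, (batch, P, l)
        return F.relu(self.W_m(torch.cat([Hbar - Hp, Hbar * Hp], dim=-1)))  # (batch, P, l)

    def forward(self, Hp, Hq, Ha):
        Mq = self.match(Hp, Hq)               # passage-question matching
        Ma = self.match(Hp, Ha)               # passage-answer matching
        return torch.cat([Mq, Ma], dim=-1)    # co-matching states C, (batch, P, 2l)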
GEM-SciDuet-train-96#paper-1251#slide-1
1251
A Co-Matching Model for Multi-choice Reading Comprehension
Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair. This paper proposes a new co-matching approach to this problem, which jointly models whether a passage can match both a question and a candidate answer. Experimental results on the RACE dataset demonstrate that our approach achieves state-of-the-art performance.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119 ], "paper_content_text": [ "Introduction Enabling machines to understand natural language text is arguably the ultimate goal of natural language processing, and the task of machine reading comprehension is an intermediate step towards this ultimate goal (Richardson et al., 2013; Hermann et al., 2015; Hill et al., 2015; Rajpurkar et al., 2016; Nguyen et al., 2016) .", "Recently, Lai et al.", "(2017) released a new multi-choice machine comprehension dataset called RACE that was extracted from middle and high school English examinations in China.", "Figure 1 shows an example passage and two related questions from RACE.", "The key difference between RACE and previously released machine comprehension datasets (e.g., the CNN/Daily Mail dataset (Hermann et al., 2015) and SQuAD (Rajpurkar et al., 2016) ) is that the answers in RACE often cannot be directly extracted from the given passages, as illustrated by the two example questions (Q1 & Q2) in Figure 1 .", "Thus, answering these questions is more challenging and requires more inferences.", "Previous approaches to machine comprehension are usually based on pairwise sequence matching, where either the passage is matched against the sequence that concatenates both the question and a candidate answer (Yin et al., 2016) , or the passage is matched against the question alone followed by a second step of selecting an answer using the matching result of the first step (Lai et al., 2017; Zhou et al., 2018) .", "However, these approaches may not be suitable for multi-choice reading comprehension since questions and answers are often equally important.", "Matching the passage only against the question may not be meaningful and may lead to loss of information from the original passage, as we can see from the first example question in Figure 1 .", "On the other hand, concatenating the question and the answer into a single sequence for matching may not work, either, due to the loss of interaction information between a question and an answer.", "As illustrated by Q2 in Figure 1 , the model may need to recognize what \"he\" and \"it\" in candidate answer (c) refer to in the question, in order to select (c) as the correct answer.", "This observation of the RACE dataset shows that we face a new challenge of matching sequence triplets (i.e., passage, question and answer) instead of pairwise matching.", "In this paper, we propose a new model to match a question-answer pair to a given passage.", "Our comatching approach explicitly treats the question and the candidate answer as two sequences and jointly matches them to the given passage.", "Specifically, for each position in the passage, we compute two attention-weighted vectors, where one is from the question and the other from the candidate answer.", "Then, two matching representations are constructed: the first one matches the passage with the question while the second one matches the passage with the candidate answer.", "These two newly constructed matching representations together form a co-matching 
state.", "Intuitively, it encodes the locational information of the question and the candidate answer matched to a specific context of the passage.", "Finally, we apply a hierar-Passage: My father wasn't a king, he was a taxi driver, but I am a prince-Prince Renato II, of the country Pontinha , an island fort on Funchal harbour.", "In 1903, the king of Portugal sold the land to a wealthy British family, the Blandys, who make Madeira wine.", "Fourteen years ago the family decided to sell it for just EUR25,000, but nobody wanted to buy it either.", "I met Blandy at a party and he asked if I'd like to buy the island.", "Of course I said yes, but I had no money-I was just an art teacher.", "I tried to find some business partners, who all thought I was crazy.", "So I sold some of my possessions, put my savings together and bought it.", "Of course, my family and my friends-all thought I was mad ...", "If l want to have a national flag, it could be blue today, red tomorrow.", "... My family sometimes drops by, and other people come every day because the country is free for tourists to visit ... Q1: Which statement of the following is true?", "Q2: How did the author get the island?", "a.", "The author made his living by driving.", "a.", "It was a present from Blandy.", "b.", "The author's wife supported to buy the island.", "b.", "The king sold it to him.", "c. Blue and red are the main colors of his national flag.", "c. He bought it from Blandy.", "d. People can travel around the island free of charge.", "d. He inherited from his father.", "chical LSTM (Tang et al., 2015) over the sequence of co-matching states at different positions of the passage.", "Information is aggregated from wordlevel to sentence-level and then from sentencelevel to document-level.", "In this way, our model can better deal with the questions that require evidence scattered in different sentences in the passage.", "Our model improves the state-of-the-art model by 3 percentage on the RACE dataset.", "Our code will be released under https://github.", "com/shuohangwang/comatch.", "Model For the task of multi-choice reading comprehension, the machine is given a passage, a question and a set of candidate answers.", "The goal is to select the correct answer from the candidates.", "Let us use P ∈ R d×P , Q ∈ R d×Q and A ∈ R d×A to represent the passage, the question and a candidate answer, respectively, where each word in each sequence is represented by an embedding vector.", "d is the dimensionality of the embeddings, and P , Q, and A are the lengths of these sequences.", "Overall our model works as follows.", "For each candidate answer, our model constructs a vector that represents the matching of P with both Q and A.", "The vectors of all candidate answers are then used for answer selection.", "Because we simultaneously match P with Q and A, we call this a comatching model.", "In Section 2.1 we introduce the word-level co-matching mechanism.", "Then in Section 2.2 we introduce a hierarchical aggregation process.", "Finally in Section 2.3 we present the objective function.", "An overview of our co-matching model is shown in Figure 2 .", "Co-matching The co-matching part of our model aims to match the passage with the question and the candidate answer at the word-level.", "Inspired by some previous work (Wang and Jiang, 2016; Trischler et al., 2016) , we first use bi-directional LSTMs (Hochreiter and Schmidhuber, 1997) to pre-process the sequences as follows: H p = Bi-LSTM(P), H q = Bi-LSTM(Q), H a = Bi-LSTM(A), (1) where H p ∈ 
R l×P , H q ∈ R l×Q and H a ∈ R l×A are the sequences of hidden states generated by the bi-directional LSTMs.", "We then make use of the attention mechanism to match each state in the passage to an aggregated representation of the question and the candidate answer.", "The attention vectors are computed as follows: G q = SoftMax (W g H q + b g ⊗ e Q ) T H p , G a = SoftMax (W g H a + b g ⊗ e Q ) T H p , H q = H q G q , H a = H a G a , (2) where W g ∈ R l×l and b g ∈ R l are the parameters to learn.", "e Q ∈ R Q is a vector of all 1s and it is used to repeat the bias vector into the matrix.", "G q ∈ R Q×P and G a ∈ R A×P are the attention M q = ReLU W m H q H p H q ⊗ H p + b m , M a = ReLU W m H a H p H a ⊗ H p + b m , C = M q M a , (3) where W g ∈ R l×2l and b g ∈ R l are the parameters to learn.", "· · is the column-wise concatenation of two matrices, and · · and · ⊗ · are the elementwise subtraction and multiplication between two matrices, which are used to build better matching representations (Tai et al., 2015; .", "M q ∈ R l×P represents the matching between the hidden states of the passage and the corresponding attention-weighted representations of the question.", "Similarly, we match the passage with the candidate answer and represent the matching results using M a ∈ R l×P .", "Finally C ∈ R 2l×P is the concatenation of M q ∈ R l×P and M a ∈ R l×P and represents how each passage state can be matched with the question and the candidate answer.", "We refer to c ∈ R 2l , which is a single column of C, as a co-matching state that concurrently matches a passage state with both the question and the candidate answer.", "Hierarchical Aggregation In order to capture the sentence structure of the passage, we further modify the model presented earlier and build a hierarchical LSTM (Tang et al., 2015) on top of the co-matching states.", "Specifically, we first split the passage into sentences and we use P 1 , P 2 , .", ".", ".", ", P N to represent these sentences, where N is the number of sentences in the passage.", "For each triplet {P n , Q, A}, n ∈ [1, N ], we can get the co-matching states C n through Eqn.", "(1-3) .", "Then we build a bi-directional LSTM followed by max pooling on top of the comatching states of each sentence as follows: h s n = MaxPooling (Bi-LSTM (C n )) , (4) where the function MaxPooling(·) is the row-wise max pooling operation.", "h s n ∈ R l , n ∈ [1, N ] is the sentence-level aggregation of the co-matching states.", "All these representations will be further integrated by another Bi-LSTM to get the final triplet matching representation.", "where H s ∈ R l×N is the concatenation of all the sentence-level representations and it is the input of a higher level LSTM.", "h t ∈ R l is the final output of the matching between the sequences of the passage, the question and the candidate answer.", "Objective function For each candidate answer A i , we can build its matching representation h t i ∈ R l with the question and the passage through Eqn.", "(5).", "Our loss function is computed as follows: L(A i |P, Q) = − log exp(w T h t i ) 4 j=1 exp(w T h t j ) , (6) where w ∈ R l is a parameter to learn.", "Experiment To evaluate the effectiveness of our hierarchical co-matching model, we use the RACE dataset (Lai et al., 2017) , which consists of two subsets: RACE-M comes from middle school examinations while RACE-H comes from high school examinations.", "RACE is the combination of the two.", "We compare our model with a number of baseline models.", "We also compare with two 
variants of our model for an ablation study.", "Comparison with Baselines We compare our model with the following baselines: • Sliding Window based method (Richardson et al., 2013) computes the matching score based on the sum of the tf-idf values of the matched words between the question-answer pair and each subpassage with a fixed window size.", "• Stanford Attentive Reader (AR) (Chen et al., 2016) first builds a question-related passage representation through an attention mechanism and then compares it with each candidate answer representation to get the answer probabilities.", "• GA (Dhingra et al., 2017) uses a gated attention mechanism with multiple hops to extract the question-related information of the passage and compares it with candidate answers.", "• ElimiNet (Soham et al., 2017) tries to first eliminate the most irrelevant choices and then select the best answer.", "• HAF (Zhou et al., 2018) considers not only the matching between the three sequences, namely, passage, question and candidate answer, but also the matching between the candidate answers.", "• MUSIC (Xu et al., 2017) integrates different sequence matching strategies into the model and also adds a unit of multi-step reasoning for selecting the answer.", "Besides, we also report the following two results as reference points: Turkers is the performance of Amazon Turkers on a randomly sampled subset of the RACE test set.", "Ceiling is the percentage of the unambiguous questions with a correct answer in a subset of the test set.", "The performance of our model together with the baselines is shown in Table 2.", "We can see that our proposed complete model, Hier-Co-Matching, achieved the best performance among all the public results.", "Still, there is a huge gap between the best machine reading performance and the human performance, showing the great potential for further research.", "Ablation Study Moreover, we conduct an ablation study of our model architecture.", "In this study, we are mainly interested in the contribution of each component introduced in this work to our final results.", "We studied two key factors: (1) the co-matching module and (2) the hierarchical aggregation approach.", "We observed a 4 percentage point performance decrease when replacing the co-matching module with a single matching state (i.e., only M^a in Eqn. (3)), obtained by directly concatenating the question with each candidate answer (Yin et al., 2016).", "We also observe about a 2 percentage point decrease when we treat the passage as a plain sequence, and run a two-layer LSTM (to ensure the numbers of parameters are comparable) over the whole passage instead of the hierarchical LSTM.", "Question Type Analysis We also conducted an analysis on what types of questions our model can handle better.", "We find that our model obtains similar performance on the \"wh\" questions such as \"why,\" \"what,\" \"when\" and \"where\" questions, on which the performance is usually around 50%.", "We also check statement-justification questions with the keyword \"true\" (e.g., \"Which of the following statements is true\"), negation questions with the keyword \"not\" (e.g., \"which of the following is not true\"), and summarization questions with the keyword \"title\" (e.g., \"what is the best title for the passage?\"), and their performance is 51%, 52% and 48%, respectively.", "We can see that the performance of our model on different types of questions in the RACE dataset is quite similar.", "However, our model is only based on word-level matching and may not have the ability
of reasoning.", "In order to answer questions that require summarization, inference or reasoning, we still need to further explore the dataset and improve the model.", "Finally, we further compared our model to the baseline, which concatenates the question with each candidate answer, and our model can achieve better performance on different types of questions.", "For example, on the subset of the questions with pronouns, our model can achieve better accuracy of 49.8% than 47.9%.", "Similarly, on statement-justification questions with the keyword \"true\", our model could achieve better accuracy of 51% than 47%.", "Conclusions In this paper, we proposed a co-matching model for multi-choice reading comprehension.", "The model consists of a co-matching component and a hierarchical aggregation component.", "We showed that our model could achieve state-of-the-art performance on the RACE dataset.", "In the future, we will adapt the idea of co-matching and hierarchical aggregation to the standard open-domain QA setting for answer candidate reranking .", "We will also further study how to explicitly model inference and reasoning on the RACE dataset." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "4" ], "paper_header_content": [ "Introduction", "Model", "Co-matching", "Hierarchical Aggregation", "Objective function", "Experiment", "Conclusions" ] }
GEM-SciDuet-train-96#paper-1251#slide-1
Race
Passage: My father wasn't a king, he was a taxi driver, but I met Blandy at a party and he asked if I'd like to buy the island. Of course I said yes but I had no money-I was just an art teacher. I tried to find some business partners, who all thought I was crazy. So I sold some of my possessions, put my savings together and bought it ... Question: How did the author get the island? a. It was a present from Blandy. b. The king sold it to him. c. He bought it from Blandy. d. He inherited from his father. Challenge: to jointly model passage, question and candidate answers
Passage: My father wasn't a king, he was a taxi driver, but I met Blandy at a party and he asked if I'd like to buy the island. Of course I said yes but I had no money-I was just an art teacher. I tried to find some business partners, who all thought I was crazy. So I sold some of my possessions, put my savings together and bought it ... Question: How did the author get the island? a. It was a present from Blandy. b. The king sold it to him. c. He bought it from Blandy. d. He inherited from his father. Challenge: to jointly model passage, question and candidate answers
[]
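The hierarchical aggregation described in the same paper text (Eqs. 4-5) stacks a sentence-level Bi-LSTM with max pooling and a passage-level Bi-LSTM with max pooling. A minimal sketch follows; it assumes the per-sentence co-matching states C_n are already computed (for instance by a module like the one sketched earlier), and the class name and the max-pooling choice for Eq. (5), which the garbled source only implies, follow the pattern of Eq. (4) rather than any released code.

```python
import torch
import torch.nn as nn

class HierAggregate(nn.Module):
    """Sketch of Eqs. (4)-(5): word-level to sentence-level to document-level."""

    def __init__(self, in_dim: int, hidden: int):
        super().__init__()
        self.sent_lstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.doc_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)

    def forward(self, sent_states):
        # sent_states: list of N tensors, each (B, P_n, in_dim) -- the co-matching
        # states C_n of one passage sentence matched against (Q, A).
        pooled = []
        for C_n in sent_states:
            out, _ = self.sent_lstm(C_n)          # (B, P_n, 2*hidden)
            pooled.append(out.max(dim=1).values)  # Eq. (4): max over words
        Hs = torch.stack(pooled, dim=1)           # (B, N, 2*hidden)
        out, _ = self.doc_lstm(Hs)                # higher-level Bi-LSTM
        return out.max(dim=1).values              # Eq. (5): max over sentences -> h^t
```

Used per candidate answer, the returned vector h^t is what Eq. (6) scores with the weight vector w.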
GEM-SciDuet-train-96#paper-1251#slide-2
1251
A Co-Matching Model for Multi-choice Reading Comprehension
Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair. This paper proposes a new co-matching approach to this problem, which jointly models whether a passage can match both a question and a candidate answer. Experimental results on the RACE dataset demonstrate that our approach achieves state-of-the-art performance.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119 ], "paper_content_text": [ "Introduction Enabling machines to understand natural language text is arguably the ultimate goal of natural language processing, and the task of machine reading comprehension is an intermediate step towards this ultimate goal (Richardson et al., 2013; Hermann et al., 2015; Hill et al., 2015; Rajpurkar et al., 2016; Nguyen et al., 2016) .", "Recently, Lai et al.", "(2017) released a new multi-choice machine comprehension dataset called RACE that was extracted from middle and high school English examinations in China.", "Figure 1 shows an example passage and two related questions from RACE.", "The key difference between RACE and previously released machine comprehension datasets (e.g., the CNN/Daily Mail dataset (Hermann et al., 2015) and SQuAD (Rajpurkar et al., 2016) ) is that the answers in RACE often cannot be directly extracted from the given passages, as illustrated by the two example questions (Q1 & Q2) in Figure 1 .", "Thus, answering these questions is more challenging and requires more inferences.", "Previous approaches to machine comprehension are usually based on pairwise sequence matching, where either the passage is matched against the sequence that concatenates both the question and a candidate answer (Yin et al., 2016) , or the passage is matched against the question alone followed by a second step of selecting an answer using the matching result of the first step (Lai et al., 2017; Zhou et al., 2018) .", "However, these approaches may not be suitable for multi-choice reading comprehension since questions and answers are often equally important.", "Matching the passage only against the question may not be meaningful and may lead to loss of information from the original passage, as we can see from the first example question in Figure 1 .", "On the other hand, concatenating the question and the answer into a single sequence for matching may not work, either, due to the loss of interaction information between a question and an answer.", "As illustrated by Q2 in Figure 1 , the model may need to recognize what \"he\" and \"it\" in candidate answer (c) refer to in the question, in order to select (c) as the correct answer.", "This observation of the RACE dataset shows that we face a new challenge of matching sequence triplets (i.e., passage, question and answer) instead of pairwise matching.", "In this paper, we propose a new model to match a question-answer pair to a given passage.", "Our comatching approach explicitly treats the question and the candidate answer as two sequences and jointly matches them to the given passage.", "Specifically, for each position in the passage, we compute two attention-weighted vectors, where one is from the question and the other from the candidate answer.", "Then, two matching representations are constructed: the first one matches the passage with the question while the second one matches the passage with the candidate answer.", "These two newly constructed matching representations together form a co-matching 
state.", "Intuitively, it encodes the locational information of the question and the candidate answer matched to a specific context of the passage.", "Finally, we apply a hierar-Passage: My father wasn't a king, he was a taxi driver, but I am a prince-Prince Renato II, of the country Pontinha , an island fort on Funchal harbour.", "In 1903, the king of Portugal sold the land to a wealthy British family, the Blandys, who make Madeira wine.", "Fourteen years ago the family decided to sell it for just EUR25,000, but nobody wanted to buy it either.", "I met Blandy at a party and he asked if I'd like to buy the island.", "Of course I said yes, but I had no money-I was just an art teacher.", "I tried to find some business partners, who all thought I was crazy.", "So I sold some of my possessions, put my savings together and bought it.", "Of course, my family and my friends-all thought I was mad ...", "If l want to have a national flag, it could be blue today, red tomorrow.", "... My family sometimes drops by, and other people come every day because the country is free for tourists to visit ... Q1: Which statement of the following is true?", "Q2: How did the author get the island?", "a.", "The author made his living by driving.", "a.", "It was a present from Blandy.", "b.", "The author's wife supported to buy the island.", "b.", "The king sold it to him.", "c. Blue and red are the main colors of his national flag.", "c. He bought it from Blandy.", "d. People can travel around the island free of charge.", "d. He inherited from his father.", "chical LSTM (Tang et al., 2015) over the sequence of co-matching states at different positions of the passage.", "Information is aggregated from wordlevel to sentence-level and then from sentencelevel to document-level.", "In this way, our model can better deal with the questions that require evidence scattered in different sentences in the passage.", "Our model improves the state-of-the-art model by 3 percentage on the RACE dataset.", "Our code will be released under https://github.", "com/shuohangwang/comatch.", "Model For the task of multi-choice reading comprehension, the machine is given a passage, a question and a set of candidate answers.", "The goal is to select the correct answer from the candidates.", "Let us use P ∈ R d×P , Q ∈ R d×Q and A ∈ R d×A to represent the passage, the question and a candidate answer, respectively, where each word in each sequence is represented by an embedding vector.", "d is the dimensionality of the embeddings, and P , Q, and A are the lengths of these sequences.", "Overall our model works as follows.", "For each candidate answer, our model constructs a vector that represents the matching of P with both Q and A.", "The vectors of all candidate answers are then used for answer selection.", "Because we simultaneously match P with Q and A, we call this a comatching model.", "In Section 2.1 we introduce the word-level co-matching mechanism.", "Then in Section 2.2 we introduce a hierarchical aggregation process.", "Finally in Section 2.3 we present the objective function.", "An overview of our co-matching model is shown in Figure 2 .", "Co-matching The co-matching part of our model aims to match the passage with the question and the candidate answer at the word-level.", "Inspired by some previous work (Wang and Jiang, 2016; Trischler et al., 2016) , we first use bi-directional LSTMs (Hochreiter and Schmidhuber, 1997) to pre-process the sequences as follows: H p = Bi-LSTM(P), H q = Bi-LSTM(Q), H a = Bi-LSTM(A), (1) where H p ∈ 
R l×P , H q ∈ R l×Q and H a ∈ R l×A are the sequences of hidden states generated by the bi-directional LSTMs.", "We then make use of the attention mechanism to match each state in the passage to an aggregated representation of the question and the candidate answer.", "The attention vectors are computed as follows: G q = SoftMax (W g H q + b g ⊗ e Q ) T H p , G a = SoftMax (W g H a + b g ⊗ e Q ) T H p , H q = H q G q , H a = H a G a , (2) where W g ∈ R l×l and b g ∈ R l are the parameters to learn.", "e Q ∈ R Q is a vector of all 1s and it is used to repeat the bias vector into the matrix.", "G q ∈ R Q×P and G a ∈ R A×P are the attention M q = ReLU W m H q H p H q ⊗ H p + b m , M a = ReLU W m H a H p H a ⊗ H p + b m , C = M q M a , (3) where W g ∈ R l×2l and b g ∈ R l are the parameters to learn.", "· · is the column-wise concatenation of two matrices, and · · and · ⊗ · are the elementwise subtraction and multiplication between two matrices, which are used to build better matching representations (Tai et al., 2015; .", "M q ∈ R l×P represents the matching between the hidden states of the passage and the corresponding attention-weighted representations of the question.", "Similarly, we match the passage with the candidate answer and represent the matching results using M a ∈ R l×P .", "Finally C ∈ R 2l×P is the concatenation of M q ∈ R l×P and M a ∈ R l×P and represents how each passage state can be matched with the question and the candidate answer.", "We refer to c ∈ R 2l , which is a single column of C, as a co-matching state that concurrently matches a passage state with both the question and the candidate answer.", "Hierarchical Aggregation In order to capture the sentence structure of the passage, we further modify the model presented earlier and build a hierarchical LSTM (Tang et al., 2015) on top of the co-matching states.", "Specifically, we first split the passage into sentences and we use P 1 , P 2 , .", ".", ".", ", P N to represent these sentences, where N is the number of sentences in the passage.", "For each triplet {P n , Q, A}, n ∈ [1, N ], we can get the co-matching states C n through Eqn.", "(1-3) .", "Then we build a bi-directional LSTM followed by max pooling on top of the comatching states of each sentence as follows: h s n = MaxPooling (Bi-LSTM (C n )) , (4) where the function MaxPooling(·) is the row-wise max pooling operation.", "h s n ∈ R l , n ∈ [1, N ] is the sentence-level aggregation of the co-matching states.", "All these representations will be further integrated by another Bi-LSTM to get the final triplet matching representation.", "where H s ∈ R l×N is the concatenation of all the sentence-level representations and it is the input of a higher level LSTM.", "h t ∈ R l is the final output of the matching between the sequences of the passage, the question and the candidate answer.", "Objective function For each candidate answer A i , we can build its matching representation h t i ∈ R l with the question and the passage through Eqn.", "(5).", "Our loss function is computed as follows: L(A i |P, Q) = − log exp(w T h t i ) 4 j=1 exp(w T h t j ) , (6) where w ∈ R l is a parameter to learn.", "Experiment To evaluate the effectiveness of our hierarchical co-matching model, we use the RACE dataset (Lai et al., 2017) , which consists of two subsets: RACE-M comes from middle school examinations while RACE-H comes from high school examinations.", "RACE is the combination of the two.", "We compare our model with a number of baseline models.", "We also compare with two 
variants of our model for an ablation study.", "Comparison with Baselines We compare our model with the following baselines: • Sliding Window based method (Richardson et al., 2013) computes the matching score based on the sum of the tf-idf values of the matched words between the question-answer pair and each subpassage with a fixed a window size.", "• Stanford Attentive Reader (AR) (Chen et al., 2016) first builds a question-related passage representation through attention mechanism and then compares it with each candidate answer representation to get the answer probabilities.", "• GA (Dhingra et al., 2017) uses gated attention mechanism with multiple hops to extract the question-related information of the passage and compares it with candidate answers.", "• ElimiNet (Soham et al., 2017) tries to first eliminate the most irrelevant choices and then select the best answer.", "• HAF (Zhou et al., 2018) considers not only the matching between the three sequences, namely, passage, question and candidate answer, but also the matching between the candidate answers.", "• MUSIC (Xu et al., 2017) integrates different sequence matching strategies into the model and also adds a unit of multi-step reasoning for selecting the answer.", "Besides, we also report the following two results as reference points: Turkers is the performance of Amazon Turkers on a randomly sampled subset of the RACE test set.", "Ceiling is the percentage of the unambiguous questions with a correct answer in a subset of the test set.", "The performance of our model together with the baselines are shown in Table 2 .", "We can see that our proposed complete model, Hier-Co-Matching, achieved the best performance among all the public results.", "Still, there is a huge gap between the best machine reading performance and the human performance, showing the great potential for further research.", "Ablation Study Moreover, we conduct an ablation study of our model architecture.", "In this study, we are mainly interested in the contribution of each component introduced in this work to our final results.", "We studied two key factors: (1) the comatching module and (2) the hierarchical aggregation approach.", "We observed a 4 percentage performance decrease by replacing the co-matching module with a single matching state (i.e., only M a in Eqn (3)) by directly concatenating the question with each candidate answer (Yin et al., 2016) .", "We also observe about 2 percentage decrease when we treat the passage as a plain sequence, and run a two-layer LSTM (to ensure the numbers of parameters are comparable) over the whole passage instead of the hierarchical LSTM.", "Question Type Analysis We also conducted an analysis on what types of questions our model can handle better.", "We find that our model obtains similar performance on the \"wh\" questions such as \"why,\" \"what,\" \"when\" and \"where\" questions, on which the performance is usually around 50%.", "We also check statement-justification questions with the keyword \"true\" (e.g., \"Which of the following statements is true\"), negation questions with the keyword \"not\" (e.g., \"which of the following is not true\"), and summarization questions with the keyword \"title\" (e.g., \"what is the best title for the passage?", "\"), and their performance is 51%, 52% and 48%, respectively.", "We can see that the performance of our model on different types of questions in the RACE dataset is quite similar.", "However, our model is only based on wordlevel matching and may not have the ability 
of reasoning.", "In order to answer questions that require summarization, inference or reasoning, we still need to further explore the dataset and improve the model.", "Finally, we further compared our model to the baseline, which concatenates the question with each candidate answer, and our model can achieve better performance on different types of questions.", "For example, on the subset of the questions with pronouns, our model can achieve better accuracy of 49.8% than 47.9%.", "Similarly, on statement-justification questions with the keyword \"true\", our model could achieve better accuracy of 51% than 47%.", "Conclusions In this paper, we proposed a co-matching model for multi-choice reading comprehension.", "The model consists of a co-matching component and a hierarchical aggregation component.", "We showed that our model could achieve state-of-the-art performance on the RACE dataset.", "In the future, we will adapt the idea of co-matching and hierarchical aggregation to the standard open-domain QA setting for answer candidate reranking .", "We will also further study how to explicitly model inference and reasoning on the RACE dataset." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "4" ], "paper_header_content": [ "Introduction", "Model", "Co-matching", "Hierarchical Aggregation", "Objective function", "Experiment", "Conclusions" ] }
GEM-SciDuet-train-96#paper-1251#slide-2
Related Work
Converted to sequence pair matching [Yin et al., 2016]: each candidate answer is concatenated with the question, and the concatenated sequences are matched against the passage. Limitations: question and answers are not clearly separated, so interaction information between a question and an answer is lost. Matching sequences pair by pair [Lai et al., 2017]: match passage and question first (Q-specific passage representation matching), then this representation is used to match candidate answers for ranking scores. Limitation: matching P & Q may not give meaningful representations for questions like 'Which statement of the following is true?'
Converted to sequence pair matching [Yin et al., 2016]: each candidate answer is concatenated with the question, and the concatenated sequences are matched against the passage. Limitations: question and answers are not clearly separated, so interaction information between a question and an answer is lost. Matching sequences pair by pair [Lai et al., 2017]: match passage and question first (Q-specific passage representation matching), then this representation is used to match candidate answers for ranking scores. Limitation: matching P & Q may not give meaningful representations for questions like 'Which statement of the following is true?'
[]
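The 'Related Work' slide above describes the baseline that concatenates the question with each candidate answer and matches the result against the passage. For contrast with the co-matching sketch earlier, here is a rough, hypothetical sketch of that style of model; it is not Yin et al.'s actual architecture, and the scoring head, pooling, and names are invented for illustration only.

```python
import torch
import torch.nn as nn

class ConcatBaseline(nn.Module):
    """Sketch of the 'concatenate Q and A, then match against P' baseline style."""

    def __init__(self, emb_dim: int, hidden: int):
        super().__init__()
        self.enc = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, P, QA):
        # P: (B, P_len, emb); QA: (B, Q_len + A_len, emb). Question and answer are
        # fused into one sequence, so the model never sees them as separate roles,
        # which is exactly the limitation the slide points out.
        Hp, _ = self.enc(P)
        Hqa, _ = self.enc(QA)
        attn = torch.softmax(torch.bmm(Hqa, Hp.transpose(1, 2)), dim=-1)  # (B, QA, P)
        matched = torch.bmm(attn, Hp)                 # passage summary per QA word
        return self.score(matched.max(dim=1).values)  # one score per candidate
```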
GEM-SciDuet-train-96#paper-1251#slide-3
1251
GEM-SciDuet-train-96#paper-1251#slide-3
Our Solution
Co-match each sentence in the passage with the question and the candidate answers separately. Make use of the alignments between sequences as follows: Question: How did the author get the island? Candidate Answer: He bought it from Blandy. Hierarchically aggregate the co-matching representations of (sentence, question, answer) triplets for final scoring.
Co-match each sentence in the passage with the question and the candidate answers separately. Make use of the alignments between sequences as follows: Question: How did the author get the island? Candidate Answer: He bought it from Blandy. Hierarchically aggregate the co-matching representations of (sentence, question, answer) triplets for final scoring.
[]
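The 'Our Solution' slide above ends with final scoring of the aggregated (sentence, question, answer) representations, which corresponds to Eq. (6) in the paper text earlier in this section: a softmax over the four candidates. A minimal sketch, assuming a batch layout of (batch, 4 candidates, l) and using cross_entropy as an equivalent form of the negative log-likelihood in Eq. (6); the function names and toy tensors are invented here.

```python
import torch
import torch.nn.functional as F

def score_candidates(h_t: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    # h_t: (B, 4, l) final matching vectors for the 4 candidate answers; w: (l,)
    return h_t @ w                        # (B, 4) unnormalized scores w^T h^t_i

def comatch_loss(h_t: torch.Tensor, w: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    # Eq. (6): softmax over the 4 candidates, negative log-likelihood of the gold one.
    logits = score_candidates(h_t, w)
    return F.cross_entropy(logits, gold)  # equals -log softmax(logits)[gold]

# Toy usage with random tensors:
l = 8
w = torch.randn(l, requires_grad=True)
h_t = torch.randn(2, 4, l)                # batch of 2 questions, 4 candidates each
gold = torch.tensor([2, 0])               # indices of the correct answers
loss = comatch_loss(h_t, w, gold)
```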
GEM-SciDuet-train-96#paper-1251#slide-4
1251
GEM-SciDuet-train-96#paper-1251#slide-4
Co Matching
For every word in a sentence, we match it with the attention-weighted vectors computed based on the question and the candidate answer, respectively. Question: How did the author get the island? Candidate Answer: He bought it from Blandy.
For every word in a sentence, we match it with the attention-weighted vectors computed based on the question and the candidate answer, respectively. Question: How did the author get the island? Candidate Answer: He bought it from Blandy.
[]
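The slide above and the paper content in the surrounding records describe the word-level co-matching step: for each passage position, attention-weighted summaries of the question and of the candidate answer are built and then combined with the passage state through element-wise difference and product. The following is a minimal NumPy sketch of that step; the toy dimensions, random weights, and the tanh projection standing in for the Bi-LSTM encoders are all illustrative assumptions, not the authors' released implementation.

# Illustrative NumPy sketch of word-level co-matching (Eqns 2-3 in the paper content).
# All weights are random stand-ins for learned parameters; a tanh projection of
# random values replaces the Bi-LSTM encoders.
import numpy as np

rng = np.random.default_rng(0)
l, P, Q, A = 8, 6, 4, 3                 # hidden size and toy sequence lengths

def encode(length):
    """Stand-in for Bi-LSTM(X): a random 'hidden state' sequence of shape (l, length)."""
    return np.tanh(rng.standard_normal((l, length)))

Hp, Hq, Ha = encode(P), encode(Q), encode(A)

Wg, bg = rng.standard_normal((l, l)), rng.standard_normal((l, 1))
Wm, bm = rng.standard_normal((l, 2 * l)), rng.standard_normal((l, 1))

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(H_other, Hp):
    """Attention-weighted summary of the question/answer for every passage position."""
    G = softmax((Wg @ H_other + bg).T @ Hp, axis=0)   # (len_other x P), columns sum to 1
    return H_other @ G                                 # (l x P)

def match(H_bar, Hp):
    """Matching representation built from element-wise difference and product."""
    feats = np.concatenate([H_bar - Hp, H_bar * Hp], axis=0)   # (2l x P)
    return np.maximum(Wm @ feats + bm, 0.0)                    # ReLU, (l x P)

Mq = match(attend(Hq, Hp), Hp)          # passage matched against the question
Ma = match(attend(Ha, Hp), Hp)          # passage matched against the candidate answer
C  = np.concatenate([Mq, Ma], axis=0)   # co-matching states, one 2l-dim column per passage word
print(C.shape)                          # (16, 6)

Each column of C is the co-matching state for one passage word; in the full model these states feed the hierarchical aggregation illustrated in the next records.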
GEM-SciDuet-train-96#paper-1251#slide-5
1251
A Co-Matching Model for Multi-choice Reading Comprehension
Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair. This paper proposes a new co-matching approach to this problem, which jointly models whether a passage can match both a question and a candidate answer. Experimental results on the RACE dataset demonstrate that our approach achieves state-of-the-art performance.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119 ], "paper_content_text": [ "Introduction Enabling machines to understand natural language text is arguably the ultimate goal of natural language processing, and the task of machine reading comprehension is an intermediate step towards this ultimate goal (Richardson et al., 2013; Hermann et al., 2015; Hill et al., 2015; Rajpurkar et al., 2016; Nguyen et al., 2016) .", "Recently, Lai et al.", "(2017) released a new multi-choice machine comprehension dataset called RACE that was extracted from middle and high school English examinations in China.", "Figure 1 shows an example passage and two related questions from RACE.", "The key difference between RACE and previously released machine comprehension datasets (e.g., the CNN/Daily Mail dataset (Hermann et al., 2015) and SQuAD (Rajpurkar et al., 2016) ) is that the answers in RACE often cannot be directly extracted from the given passages, as illustrated by the two example questions (Q1 & Q2) in Figure 1 .", "Thus, answering these questions is more challenging and requires more inferences.", "Previous approaches to machine comprehension are usually based on pairwise sequence matching, where either the passage is matched against the sequence that concatenates both the question and a candidate answer (Yin et al., 2016) , or the passage is matched against the question alone followed by a second step of selecting an answer using the matching result of the first step (Lai et al., 2017; Zhou et al., 2018) .", "However, these approaches may not be suitable for multi-choice reading comprehension since questions and answers are often equally important.", "Matching the passage only against the question may not be meaningful and may lead to loss of information from the original passage, as we can see from the first example question in Figure 1 .", "On the other hand, concatenating the question and the answer into a single sequence for matching may not work, either, due to the loss of interaction information between a question and an answer.", "As illustrated by Q2 in Figure 1 , the model may need to recognize what \"he\" and \"it\" in candidate answer (c) refer to in the question, in order to select (c) as the correct answer.", "This observation of the RACE dataset shows that we face a new challenge of matching sequence triplets (i.e., passage, question and answer) instead of pairwise matching.", "In this paper, we propose a new model to match a question-answer pair to a given passage.", "Our comatching approach explicitly treats the question and the candidate answer as two sequences and jointly matches them to the given passage.", "Specifically, for each position in the passage, we compute two attention-weighted vectors, where one is from the question and the other from the candidate answer.", "Then, two matching representations are constructed: the first one matches the passage with the question while the second one matches the passage with the candidate answer.", "These two newly constructed matching representations together form a co-matching 
state.", "Intuitively, it encodes the locational information of the question and the candidate answer matched to a specific context of the passage.", "Finally, we apply a hierar-Passage: My father wasn't a king, he was a taxi driver, but I am a prince-Prince Renato II, of the country Pontinha , an island fort on Funchal harbour.", "In 1903, the king of Portugal sold the land to a wealthy British family, the Blandys, who make Madeira wine.", "Fourteen years ago the family decided to sell it for just EUR25,000, but nobody wanted to buy it either.", "I met Blandy at a party and he asked if I'd like to buy the island.", "Of course I said yes, but I had no money-I was just an art teacher.", "I tried to find some business partners, who all thought I was crazy.", "So I sold some of my possessions, put my savings together and bought it.", "Of course, my family and my friends-all thought I was mad ...", "If l want to have a national flag, it could be blue today, red tomorrow.", "... My family sometimes drops by, and other people come every day because the country is free for tourists to visit ... Q1: Which statement of the following is true?", "Q2: How did the author get the island?", "a.", "The author made his living by driving.", "a.", "It was a present from Blandy.", "b.", "The author's wife supported to buy the island.", "b.", "The king sold it to him.", "c. Blue and red are the main colors of his national flag.", "c. He bought it from Blandy.", "d. People can travel around the island free of charge.", "d. He inherited from his father.", "chical LSTM (Tang et al., 2015) over the sequence of co-matching states at different positions of the passage.", "Information is aggregated from wordlevel to sentence-level and then from sentencelevel to document-level.", "In this way, our model can better deal with the questions that require evidence scattered in different sentences in the passage.", "Our model improves the state-of-the-art model by 3 percentage on the RACE dataset.", "Our code will be released under https://github.", "com/shuohangwang/comatch.", "Model For the task of multi-choice reading comprehension, the machine is given a passage, a question and a set of candidate answers.", "The goal is to select the correct answer from the candidates.", "Let us use P ∈ R d×P , Q ∈ R d×Q and A ∈ R d×A to represent the passage, the question and a candidate answer, respectively, where each word in each sequence is represented by an embedding vector.", "d is the dimensionality of the embeddings, and P , Q, and A are the lengths of these sequences.", "Overall our model works as follows.", "For each candidate answer, our model constructs a vector that represents the matching of P with both Q and A.", "The vectors of all candidate answers are then used for answer selection.", "Because we simultaneously match P with Q and A, we call this a comatching model.", "In Section 2.1 we introduce the word-level co-matching mechanism.", "Then in Section 2.2 we introduce a hierarchical aggregation process.", "Finally in Section 2.3 we present the objective function.", "An overview of our co-matching model is shown in Figure 2 .", "Co-matching The co-matching part of our model aims to match the passage with the question and the candidate answer at the word-level.", "Inspired by some previous work (Wang and Jiang, 2016; Trischler et al., 2016) , we first use bi-directional LSTMs (Hochreiter and Schmidhuber, 1997) to pre-process the sequences as follows: H p = Bi-LSTM(P), H q = Bi-LSTM(Q), H a = Bi-LSTM(A), (1) where H p ∈ 
R l×P , H q ∈ R l×Q and H a ∈ R l×A are the sequences of hidden states generated by the bi-directional LSTMs.", "We then make use of the attention mechanism to match each state in the passage to an aggregated representation of the question and the candidate answer.", "The attention vectors are computed as follows: G q = SoftMax (W g H q + b g ⊗ e Q ) T H p , G a = SoftMax (W g H a + b g ⊗ e Q ) T H p , H q = H q G q , H a = H a G a , (2) where W g ∈ R l×l and b g ∈ R l are the parameters to learn.", "e Q ∈ R Q is a vector of all 1s and it is used to repeat the bias vector into the matrix.", "G q ∈ R Q×P and G a ∈ R A×P are the attention M q = ReLU W m H q H p H q ⊗ H p + b m , M a = ReLU W m H a H p H a ⊗ H p + b m , C = M q M a , (3) where W g ∈ R l×2l and b g ∈ R l are the parameters to learn.", "· · is the column-wise concatenation of two matrices, and · · and · ⊗ · are the elementwise subtraction and multiplication between two matrices, which are used to build better matching representations (Tai et al., 2015; .", "M q ∈ R l×P represents the matching between the hidden states of the passage and the corresponding attention-weighted representations of the question.", "Similarly, we match the passage with the candidate answer and represent the matching results using M a ∈ R l×P .", "Finally C ∈ R 2l×P is the concatenation of M q ∈ R l×P and M a ∈ R l×P and represents how each passage state can be matched with the question and the candidate answer.", "We refer to c ∈ R 2l , which is a single column of C, as a co-matching state that concurrently matches a passage state with both the question and the candidate answer.", "Hierarchical Aggregation In order to capture the sentence structure of the passage, we further modify the model presented earlier and build a hierarchical LSTM (Tang et al., 2015) on top of the co-matching states.", "Specifically, we first split the passage into sentences and we use P 1 , P 2 , .", ".", ".", ", P N to represent these sentences, where N is the number of sentences in the passage.", "For each triplet {P n , Q, A}, n ∈ [1, N ], we can get the co-matching states C n through Eqn.", "(1-3) .", "Then we build a bi-directional LSTM followed by max pooling on top of the comatching states of each sentence as follows: h s n = MaxPooling (Bi-LSTM (C n )) , (4) where the function MaxPooling(·) is the row-wise max pooling operation.", "h s n ∈ R l , n ∈ [1, N ] is the sentence-level aggregation of the co-matching states.", "All these representations will be further integrated by another Bi-LSTM to get the final triplet matching representation.", "where H s ∈ R l×N is the concatenation of all the sentence-level representations and it is the input of a higher level LSTM.", "h t ∈ R l is the final output of the matching between the sequences of the passage, the question and the candidate answer.", "Objective function For each candidate answer A i , we can build its matching representation h t i ∈ R l with the question and the passage through Eqn.", "(5).", "Our loss function is computed as follows: L(A i |P, Q) = − log exp(w T h t i ) 4 j=1 exp(w T h t j ) , (6) where w ∈ R l is a parameter to learn.", "Experiment To evaluate the effectiveness of our hierarchical co-matching model, we use the RACE dataset (Lai et al., 2017) , which consists of two subsets: RACE-M comes from middle school examinations while RACE-H comes from high school examinations.", "RACE is the combination of the two.", "We compare our model with a number of baseline models.", "We also compare with two 
variants of our model for an ablation study.", "Comparison with Baselines We compare our model with the following baselines: • Sliding Window based method (Richardson et al., 2013) computes the matching score based on the sum of the tf-idf values of the matched words between the question-answer pair and each subpassage with a fixed a window size.", "• Stanford Attentive Reader (AR) (Chen et al., 2016) first builds a question-related passage representation through attention mechanism and then compares it with each candidate answer representation to get the answer probabilities.", "• GA (Dhingra et al., 2017) uses gated attention mechanism with multiple hops to extract the question-related information of the passage and compares it with candidate answers.", "• ElimiNet (Soham et al., 2017) tries to first eliminate the most irrelevant choices and then select the best answer.", "• HAF (Zhou et al., 2018) considers not only the matching between the three sequences, namely, passage, question and candidate answer, but also the matching between the candidate answers.", "• MUSIC (Xu et al., 2017) integrates different sequence matching strategies into the model and also adds a unit of multi-step reasoning for selecting the answer.", "Besides, we also report the following two results as reference points: Turkers is the performance of Amazon Turkers on a randomly sampled subset of the RACE test set.", "Ceiling is the percentage of the unambiguous questions with a correct answer in a subset of the test set.", "The performance of our model together with the baselines are shown in Table 2 .", "We can see that our proposed complete model, Hier-Co-Matching, achieved the best performance among all the public results.", "Still, there is a huge gap between the best machine reading performance and the human performance, showing the great potential for further research.", "Ablation Study Moreover, we conduct an ablation study of our model architecture.", "In this study, we are mainly interested in the contribution of each component introduced in this work to our final results.", "We studied two key factors: (1) the comatching module and (2) the hierarchical aggregation approach.", "We observed a 4 percentage performance decrease by replacing the co-matching module with a single matching state (i.e., only M a in Eqn (3)) by directly concatenating the question with each candidate answer (Yin et al., 2016) .", "We also observe about 2 percentage decrease when we treat the passage as a plain sequence, and run a two-layer LSTM (to ensure the numbers of parameters are comparable) over the whole passage instead of the hierarchical LSTM.", "Question Type Analysis We also conducted an analysis on what types of questions our model can handle better.", "We find that our model obtains similar performance on the \"wh\" questions such as \"why,\" \"what,\" \"when\" and \"where\" questions, on which the performance is usually around 50%.", "We also check statement-justification questions with the keyword \"true\" (e.g., \"Which of the following statements is true\"), negation questions with the keyword \"not\" (e.g., \"which of the following is not true\"), and summarization questions with the keyword \"title\" (e.g., \"what is the best title for the passage?", "\"), and their performance is 51%, 52% and 48%, respectively.", "We can see that the performance of our model on different types of questions in the RACE dataset is quite similar.", "However, our model is only based on wordlevel matching and may not have the ability 
of reasoning.", "In order to answer questions that require summarization, inference or reasoning, we still need to further explore the dataset and improve the model.", "Finally, we further compared our model to the baseline, which concatenates the question with each candidate answer, and our model can achieve better performance on different types of questions.", "For example, on the subset of the questions with pronouns, our model can achieve better accuracy of 49.8% than 47.9%.", "Similarly, on statement-justification questions with the keyword \"true\", our model could achieve better accuracy of 51% than 47%.", "Conclusions In this paper, we proposed a co-matching model for multi-choice reading comprehension.", "The model consists of a co-matching component and a hierarchical aggregation component.", "We showed that our model could achieve state-of-the-art performance on the RACE dataset.", "In the future, we will adapt the idea of co-matching and hierarchical aggregation to the standard open-domain QA setting for answer candidate reranking .", "We will also further study how to explicitly model inference and reasoning on the RACE dataset." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "4" ], "paper_header_content": [ "Introduction", "Model", "Co-matching", "Hierarchical Aggregation", "Objective function", "Experiment", "Conclusions" ] }
GEM-SciDuet-train-96#paper-1251#slide-5
Framework
[Framework diagram] The question and a candidate answer are co-matched with each sentence of the passage (1st, 2nd, ..., Nth sentence); the sentence-level results are aggregated into a representation for ranking the candidate answers.
[Framework diagram] The question and a candidate answer are co-matched with each sentence of the passage (1st, 2nd, ..., Nth sentence); the sentence-level results are aggregated into a representation for ranking the candidate answers.
[]
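This framework slide corresponds to the hierarchical aggregation described in the accompanying paper content: a Bi-LSTM plus row-wise max pooling turns each sentence's co-matching states into one vector, and a higher-level Bi-LSTM aggregates the sentence vectors into the final matching representation. Below is a minimal NumPy sketch of that two-level aggregation; sentence_comatching and bi_lstm are hypothetical stand-ins (random projections), not the paper's actual components.

# Illustrative NumPy sketch of hierarchical aggregation (Eqns 4-5 in the paper content).
import numpy as np

rng = np.random.default_rng(1)
l, N = 8, 5                                    # hidden size, number of passage sentences
sent_lengths = [6, 4, 7, 5, 3]                 # toy lengths of the N sentences

def sentence_comatching(length):
    """Stand-in for the co-matching states C_n of one sentence, shape (2l, length)."""
    return rng.standard_normal((2 * l, length))

def bi_lstm(X, out_dim):
    """Stand-in for a Bi-LSTM: a random projection followed by tanh (illustrative only)."""
    W = rng.standard_normal((out_dim, X.shape[0]))
    return np.tanh(W @ X)

# Sentence level: encode each sentence's co-matching states, then row-wise max
# pooling gives one l-dim vector per sentence.
h_s = [bi_lstm(sentence_comatching(T), l).max(axis=1) for T in sent_lengths]
H_s = np.stack(h_s, axis=1)                    # (l x N)

# Document level: another encoder over the sentence vectors, pooled into the final
# triplet-matching representation h_t used to score this candidate answer.
h_t = bi_lstm(H_s, l).max(axis=1)              # (l,)
print(h_t.shape)                               # (8,)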
GEM-SciDuet-train-96#paper-1251#slide-6
1251
A Co-Matching Model for Multi-choice Reading Comprehension
Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair. This paper proposes a new co-matching approach to this problem, which jointly models whether a passage can match both a question and a candidate answer. Experimental results on the RACE dataset demonstrate that our approach achieves state-of-the-art performance.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119 ], "paper_content_text": [ "Introduction Enabling machines to understand natural language text is arguably the ultimate goal of natural language processing, and the task of machine reading comprehension is an intermediate step towards this ultimate goal (Richardson et al., 2013; Hermann et al., 2015; Hill et al., 2015; Rajpurkar et al., 2016; Nguyen et al., 2016) .", "Recently, Lai et al.", "(2017) released a new multi-choice machine comprehension dataset called RACE that was extracted from middle and high school English examinations in China.", "Figure 1 shows an example passage and two related questions from RACE.", "The key difference between RACE and previously released machine comprehension datasets (e.g., the CNN/Daily Mail dataset (Hermann et al., 2015) and SQuAD (Rajpurkar et al., 2016) ) is that the answers in RACE often cannot be directly extracted from the given passages, as illustrated by the two example questions (Q1 & Q2) in Figure 1 .", "Thus, answering these questions is more challenging and requires more inferences.", "Previous approaches to machine comprehension are usually based on pairwise sequence matching, where either the passage is matched against the sequence that concatenates both the question and a candidate answer (Yin et al., 2016) , or the passage is matched against the question alone followed by a second step of selecting an answer using the matching result of the first step (Lai et al., 2017; Zhou et al., 2018) .", "However, these approaches may not be suitable for multi-choice reading comprehension since questions and answers are often equally important.", "Matching the passage only against the question may not be meaningful and may lead to loss of information from the original passage, as we can see from the first example question in Figure 1 .", "On the other hand, concatenating the question and the answer into a single sequence for matching may not work, either, due to the loss of interaction information between a question and an answer.", "As illustrated by Q2 in Figure 1 , the model may need to recognize what \"he\" and \"it\" in candidate answer (c) refer to in the question, in order to select (c) as the correct answer.", "This observation of the RACE dataset shows that we face a new challenge of matching sequence triplets (i.e., passage, question and answer) instead of pairwise matching.", "In this paper, we propose a new model to match a question-answer pair to a given passage.", "Our comatching approach explicitly treats the question and the candidate answer as two sequences and jointly matches them to the given passage.", "Specifically, for each position in the passage, we compute two attention-weighted vectors, where one is from the question and the other from the candidate answer.", "Then, two matching representations are constructed: the first one matches the passage with the question while the second one matches the passage with the candidate answer.", "These two newly constructed matching representations together form a co-matching 
state.", "Intuitively, it encodes the locational information of the question and the candidate answer matched to a specific context of the passage.", "Finally, we apply a hierar-Passage: My father wasn't a king, he was a taxi driver, but I am a prince-Prince Renato II, of the country Pontinha , an island fort on Funchal harbour.", "In 1903, the king of Portugal sold the land to a wealthy British family, the Blandys, who make Madeira wine.", "Fourteen years ago the family decided to sell it for just EUR25,000, but nobody wanted to buy it either.", "I met Blandy at a party and he asked if I'd like to buy the island.", "Of course I said yes, but I had no money-I was just an art teacher.", "I tried to find some business partners, who all thought I was crazy.", "So I sold some of my possessions, put my savings together and bought it.", "Of course, my family and my friends-all thought I was mad ...", "If l want to have a national flag, it could be blue today, red tomorrow.", "... My family sometimes drops by, and other people come every day because the country is free for tourists to visit ... Q1: Which statement of the following is true?", "Q2: How did the author get the island?", "a.", "The author made his living by driving.", "a.", "It was a present from Blandy.", "b.", "The author's wife supported to buy the island.", "b.", "The king sold it to him.", "c. Blue and red are the main colors of his national flag.", "c. He bought it from Blandy.", "d. People can travel around the island free of charge.", "d. He inherited from his father.", "chical LSTM (Tang et al., 2015) over the sequence of co-matching states at different positions of the passage.", "Information is aggregated from wordlevel to sentence-level and then from sentencelevel to document-level.", "In this way, our model can better deal with the questions that require evidence scattered in different sentences in the passage.", "Our model improves the state-of-the-art model by 3 percentage on the RACE dataset.", "Our code will be released under https://github.", "com/shuohangwang/comatch.", "Model For the task of multi-choice reading comprehension, the machine is given a passage, a question and a set of candidate answers.", "The goal is to select the correct answer from the candidates.", "Let us use P ∈ R d×P , Q ∈ R d×Q and A ∈ R d×A to represent the passage, the question and a candidate answer, respectively, where each word in each sequence is represented by an embedding vector.", "d is the dimensionality of the embeddings, and P , Q, and A are the lengths of these sequences.", "Overall our model works as follows.", "For each candidate answer, our model constructs a vector that represents the matching of P with both Q and A.", "The vectors of all candidate answers are then used for answer selection.", "Because we simultaneously match P with Q and A, we call this a comatching model.", "In Section 2.1 we introduce the word-level co-matching mechanism.", "Then in Section 2.2 we introduce a hierarchical aggregation process.", "Finally in Section 2.3 we present the objective function.", "An overview of our co-matching model is shown in Figure 2 .", "Co-matching The co-matching part of our model aims to match the passage with the question and the candidate answer at the word-level.", "Inspired by some previous work (Wang and Jiang, 2016; Trischler et al., 2016) , we first use bi-directional LSTMs (Hochreiter and Schmidhuber, 1997) to pre-process the sequences as follows: H p = Bi-LSTM(P), H q = Bi-LSTM(Q), H a = Bi-LSTM(A), (1) where H p ∈ 
R l×P , H q ∈ R l×Q and H a ∈ R l×A are the sequences of hidden states generated by the bi-directional LSTMs.", "We then make use of the attention mechanism to match each state in the passage to an aggregated representation of the question and the candidate answer.", "The attention vectors are computed as follows: G q = SoftMax (W g H q + b g ⊗ e Q ) T H p , G a = SoftMax (W g H a + b g ⊗ e Q ) T H p , H q = H q G q , H a = H a G a , (2) where W g ∈ R l×l and b g ∈ R l are the parameters to learn.", "e Q ∈ R Q is a vector of all 1s and it is used to repeat the bias vector into the matrix.", "G q ∈ R Q×P and G a ∈ R A×P are the attention M q = ReLU W m H q H p H q ⊗ H p + b m , M a = ReLU W m H a H p H a ⊗ H p + b m , C = M q M a , (3) where W g ∈ R l×2l and b g ∈ R l are the parameters to learn.", "· · is the column-wise concatenation of two matrices, and · · and · ⊗ · are the elementwise subtraction and multiplication between two matrices, which are used to build better matching representations (Tai et al., 2015; .", "M q ∈ R l×P represents the matching between the hidden states of the passage and the corresponding attention-weighted representations of the question.", "Similarly, we match the passage with the candidate answer and represent the matching results using M a ∈ R l×P .", "Finally C ∈ R 2l×P is the concatenation of M q ∈ R l×P and M a ∈ R l×P and represents how each passage state can be matched with the question and the candidate answer.", "We refer to c ∈ R 2l , which is a single column of C, as a co-matching state that concurrently matches a passage state with both the question and the candidate answer.", "Hierarchical Aggregation In order to capture the sentence structure of the passage, we further modify the model presented earlier and build a hierarchical LSTM (Tang et al., 2015) on top of the co-matching states.", "Specifically, we first split the passage into sentences and we use P 1 , P 2 , .", ".", ".", ", P N to represent these sentences, where N is the number of sentences in the passage.", "For each triplet {P n , Q, A}, n ∈ [1, N ], we can get the co-matching states C n through Eqn.", "(1-3) .", "Then we build a bi-directional LSTM followed by max pooling on top of the comatching states of each sentence as follows: h s n = MaxPooling (Bi-LSTM (C n )) , (4) where the function MaxPooling(·) is the row-wise max pooling operation.", "h s n ∈ R l , n ∈ [1, N ] is the sentence-level aggregation of the co-matching states.", "All these representations will be further integrated by another Bi-LSTM to get the final triplet matching representation.", "where H s ∈ R l×N is the concatenation of all the sentence-level representations and it is the input of a higher level LSTM.", "h t ∈ R l is the final output of the matching between the sequences of the passage, the question and the candidate answer.", "Objective function For each candidate answer A i , we can build its matching representation h t i ∈ R l with the question and the passage through Eqn.", "(5).", "Our loss function is computed as follows: L(A i |P, Q) = − log exp(w T h t i ) 4 j=1 exp(w T h t j ) , (6) where w ∈ R l is a parameter to learn.", "Experiment To evaluate the effectiveness of our hierarchical co-matching model, we use the RACE dataset (Lai et al., 2017) , which consists of two subsets: RACE-M comes from middle school examinations while RACE-H comes from high school examinations.", "RACE is the combination of the two.", "We compare our model with a number of baseline models.", "We also compare with two 
variants of our model for an ablation study.", "Comparison with Baselines We compare our model with the following baselines: • Sliding Window based method (Richardson et al., 2013) computes the matching score based on the sum of the tf-idf values of the matched words between the question-answer pair and each subpassage with a fixed a window size.", "• Stanford Attentive Reader (AR) (Chen et al., 2016) first builds a question-related passage representation through attention mechanism and then compares it with each candidate answer representation to get the answer probabilities.", "• GA (Dhingra et al., 2017) uses gated attention mechanism with multiple hops to extract the question-related information of the passage and compares it with candidate answers.", "• ElimiNet (Soham et al., 2017) tries to first eliminate the most irrelevant choices and then select the best answer.", "• HAF (Zhou et al., 2018) considers not only the matching between the three sequences, namely, passage, question and candidate answer, but also the matching between the candidate answers.", "• MUSIC (Xu et al., 2017) integrates different sequence matching strategies into the model and also adds a unit of multi-step reasoning for selecting the answer.", "Besides, we also report the following two results as reference points: Turkers is the performance of Amazon Turkers on a randomly sampled subset of the RACE test set.", "Ceiling is the percentage of the unambiguous questions with a correct answer in a subset of the test set.", "The performance of our model together with the baselines are shown in Table 2 .", "We can see that our proposed complete model, Hier-Co-Matching, achieved the best performance among all the public results.", "Still, there is a huge gap between the best machine reading performance and the human performance, showing the great potential for further research.", "Ablation Study Moreover, we conduct an ablation study of our model architecture.", "In this study, we are mainly interested in the contribution of each component introduced in this work to our final results.", "We studied two key factors: (1) the comatching module and (2) the hierarchical aggregation approach.", "We observed a 4 percentage performance decrease by replacing the co-matching module with a single matching state (i.e., only M a in Eqn (3)) by directly concatenating the question with each candidate answer (Yin et al., 2016) .", "We also observe about 2 percentage decrease when we treat the passage as a plain sequence, and run a two-layer LSTM (to ensure the numbers of parameters are comparable) over the whole passage instead of the hierarchical LSTM.", "Question Type Analysis We also conducted an analysis on what types of questions our model can handle better.", "We find that our model obtains similar performance on the \"wh\" questions such as \"why,\" \"what,\" \"when\" and \"where\" questions, on which the performance is usually around 50%.", "We also check statement-justification questions with the keyword \"true\" (e.g., \"Which of the following statements is true\"), negation questions with the keyword \"not\" (e.g., \"which of the following is not true\"), and summarization questions with the keyword \"title\" (e.g., \"what is the best title for the passage?", "\"), and their performance is 51%, 52% and 48%, respectively.", "We can see that the performance of our model on different types of questions in the RACE dataset is quite similar.", "However, our model is only based on wordlevel matching and may not have the ability 
of reasoning.", "In order to answer questions that require summarization, inference or reasoning, we still need to further explore the dataset and improve the model.", "Finally, we further compared our model to the baseline, which concatenates the question with each candidate answer, and our model can achieve better performance on different types of questions.", "For example, on the subset of the questions with pronouns, our model can achieve better accuracy of 49.8% than 47.9%.", "Similarly, on statement-justification questions with the keyword \"true\", our model could achieve better accuracy of 51% than 47%.", "Conclusions In this paper, we proposed a co-matching model for multi-choice reading comprehension.", "The model consists of a co-matching component and a hierarchical aggregation component.", "We showed that our model could achieve state-of-the-art performance on the RACE dataset.", "In the future, we will adapt the idea of co-matching and hierarchical aggregation to the standard open-domain QA setting for answer candidate reranking .", "We will also further study how to explicitly model inference and reasoning on the RACE dataset." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "4" ], "paper_header_content": [ "Introduction", "Model", "Co-matching", "Hierarchical Aggregation", "Objective function", "Experiment", "Conclusions" ] }
GEM-SciDuet-train-96#paper-1251#slide-6
Experiments
Our Hier-Co-Matching achieved the best performance compared with previous work. We studied two key factors: (1) the co-matching module (2) the hierarchical aggregation approach
Our Hier-Co-Matching achieved the best performance compared with previous work. We studied two key factors: (1) the co-matching module (2) the hierarchical aggregation approach
[]
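The experiments slide above reports the accuracy of the full Hier-Co-Matching model and its ablations; at prediction time each of the four candidate answers is scored by w^T h_t and the model is trained with the softmax loss of Eqn (6) in the paper content. A minimal NumPy sketch of that scoring and loss follows, with random vectors standing in for the learned matching representations and weight vector.

# Illustrative NumPy sketch of candidate scoring and the loss in Eqn (6).
import numpy as np

rng = np.random.default_rng(2)
l = 8
h_t = rng.standard_normal((4, l))       # one triplet-matching vector per candidate answer
w   = rng.standard_normal(l)

logits = h_t @ w                        # w^T h_t_i for each candidate i
probs  = np.exp(logits - logits.max())
probs /= probs.sum()

predicted = int(np.argmax(probs))       # answer selection at test time
gold = 2                                # hypothetical index of the correct candidate
loss = -np.log(probs[gold])             # cross-entropy over the four candidates
print(predicted, float(loss))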
GEM-SciDuet-train-96#paper-1251#slide-7
1251
A Co-Matching Model for Multi-choice Reading Comprehension
Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair. This paper proposes a new co-matching approach to this problem, which jointly models whether a passage can match both a question and a candidate answer. Experimental results on the RACE dataset demonstrate that our approach achieves state-of-the-art performance.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119 ], "paper_content_text": [ "Introduction Enabling machines to understand natural language text is arguably the ultimate goal of natural language processing, and the task of machine reading comprehension is an intermediate step towards this ultimate goal (Richardson et al., 2013; Hermann et al., 2015; Hill et al., 2015; Rajpurkar et al., 2016; Nguyen et al., 2016) .", "Recently, Lai et al.", "(2017) released a new multi-choice machine comprehension dataset called RACE that was extracted from middle and high school English examinations in China.", "Figure 1 shows an example passage and two related questions from RACE.", "The key difference between RACE and previously released machine comprehension datasets (e.g., the CNN/Daily Mail dataset (Hermann et al., 2015) and SQuAD (Rajpurkar et al., 2016) ) is that the answers in RACE often cannot be directly extracted from the given passages, as illustrated by the two example questions (Q1 & Q2) in Figure 1 .", "Thus, answering these questions is more challenging and requires more inferences.", "Previous approaches to machine comprehension are usually based on pairwise sequence matching, where either the passage is matched against the sequence that concatenates both the question and a candidate answer (Yin et al., 2016) , or the passage is matched against the question alone followed by a second step of selecting an answer using the matching result of the first step (Lai et al., 2017; Zhou et al., 2018) .", "However, these approaches may not be suitable for multi-choice reading comprehension since questions and answers are often equally important.", "Matching the passage only against the question may not be meaningful and may lead to loss of information from the original passage, as we can see from the first example question in Figure 1 .", "On the other hand, concatenating the question and the answer into a single sequence for matching may not work, either, due to the loss of interaction information between a question and an answer.", "As illustrated by Q2 in Figure 1 , the model may need to recognize what \"he\" and \"it\" in candidate answer (c) refer to in the question, in order to select (c) as the correct answer.", "This observation of the RACE dataset shows that we face a new challenge of matching sequence triplets (i.e., passage, question and answer) instead of pairwise matching.", "In this paper, we propose a new model to match a question-answer pair to a given passage.", "Our comatching approach explicitly treats the question and the candidate answer as two sequences and jointly matches them to the given passage.", "Specifically, for each position in the passage, we compute two attention-weighted vectors, where one is from the question and the other from the candidate answer.", "Then, two matching representations are constructed: the first one matches the passage with the question while the second one matches the passage with the candidate answer.", "These two newly constructed matching representations together form a co-matching 
state.", "Intuitively, it encodes the locational information of the question and the candidate answer matched to a specific context of the passage.", "Finally, we apply a hierar-Passage: My father wasn't a king, he was a taxi driver, but I am a prince-Prince Renato II, of the country Pontinha , an island fort on Funchal harbour.", "In 1903, the king of Portugal sold the land to a wealthy British family, the Blandys, who make Madeira wine.", "Fourteen years ago the family decided to sell it for just EUR25,000, but nobody wanted to buy it either.", "I met Blandy at a party and he asked if I'd like to buy the island.", "Of course I said yes, but I had no money-I was just an art teacher.", "I tried to find some business partners, who all thought I was crazy.", "So I sold some of my possessions, put my savings together and bought it.", "Of course, my family and my friends-all thought I was mad ...", "If l want to have a national flag, it could be blue today, red tomorrow.", "... My family sometimes drops by, and other people come every day because the country is free for tourists to visit ... Q1: Which statement of the following is true?", "Q2: How did the author get the island?", "a.", "The author made his living by driving.", "a.", "It was a present from Blandy.", "b.", "The author's wife supported to buy the island.", "b.", "The king sold it to him.", "c. Blue and red are the main colors of his national flag.", "c. He bought it from Blandy.", "d. People can travel around the island free of charge.", "d. He inherited from his father.", "chical LSTM (Tang et al., 2015) over the sequence of co-matching states at different positions of the passage.", "Information is aggregated from wordlevel to sentence-level and then from sentencelevel to document-level.", "In this way, our model can better deal with the questions that require evidence scattered in different sentences in the passage.", "Our model improves the state-of-the-art model by 3 percentage on the RACE dataset.", "Our code will be released under https://github.", "com/shuohangwang/comatch.", "Model For the task of multi-choice reading comprehension, the machine is given a passage, a question and a set of candidate answers.", "The goal is to select the correct answer from the candidates.", "Let us use P ∈ R d×P , Q ∈ R d×Q and A ∈ R d×A to represent the passage, the question and a candidate answer, respectively, where each word in each sequence is represented by an embedding vector.", "d is the dimensionality of the embeddings, and P , Q, and A are the lengths of these sequences.", "Overall our model works as follows.", "For each candidate answer, our model constructs a vector that represents the matching of P with both Q and A.", "The vectors of all candidate answers are then used for answer selection.", "Because we simultaneously match P with Q and A, we call this a comatching model.", "In Section 2.1 we introduce the word-level co-matching mechanism.", "Then in Section 2.2 we introduce a hierarchical aggregation process.", "Finally in Section 2.3 we present the objective function.", "An overview of our co-matching model is shown in Figure 2 .", "Co-matching The co-matching part of our model aims to match the passage with the question and the candidate answer at the word-level.", "Inspired by some previous work (Wang and Jiang, 2016; Trischler et al., 2016) , we first use bi-directional LSTMs (Hochreiter and Schmidhuber, 1997) to pre-process the sequences as follows: H p = Bi-LSTM(P), H q = Bi-LSTM(Q), H a = Bi-LSTM(A), (1) where H p ∈ 
R l×P , H q ∈ R l×Q and H a ∈ R l×A are the sequences of hidden states generated by the bi-directional LSTMs.", "We then make use of the attention mechanism to match each state in the passage to an aggregated representation of the question and the candidate answer.", "The attention vectors are computed as follows: G q = SoftMax (W g H q + b g ⊗ e Q ) T H p , G a = SoftMax (W g H a + b g ⊗ e Q ) T H p , H q = H q G q , H a = H a G a , (2) where W g ∈ R l×l and b g ∈ R l are the parameters to learn.", "e Q ∈ R Q is a vector of all 1s and it is used to repeat the bias vector into the matrix.", "G q ∈ R Q×P and G a ∈ R A×P are the attention M q = ReLU W m H q H p H q ⊗ H p + b m , M a = ReLU W m H a H p H a ⊗ H p + b m , C = M q M a , (3) where W g ∈ R l×2l and b g ∈ R l are the parameters to learn.", "· · is the column-wise concatenation of two matrices, and · · and · ⊗ · are the elementwise subtraction and multiplication between two matrices, which are used to build better matching representations (Tai et al., 2015; .", "M q ∈ R l×P represents the matching between the hidden states of the passage and the corresponding attention-weighted representations of the question.", "Similarly, we match the passage with the candidate answer and represent the matching results using M a ∈ R l×P .", "Finally C ∈ R 2l×P is the concatenation of M q ∈ R l×P and M a ∈ R l×P and represents how each passage state can be matched with the question and the candidate answer.", "We refer to c ∈ R 2l , which is a single column of C, as a co-matching state that concurrently matches a passage state with both the question and the candidate answer.", "Hierarchical Aggregation In order to capture the sentence structure of the passage, we further modify the model presented earlier and build a hierarchical LSTM (Tang et al., 2015) on top of the co-matching states.", "Specifically, we first split the passage into sentences and we use P 1 , P 2 , .", ".", ".", ", P N to represent these sentences, where N is the number of sentences in the passage.", "For each triplet {P n , Q, A}, n ∈ [1, N ], we can get the co-matching states C n through Eqn.", "(1-3) .", "Then we build a bi-directional LSTM followed by max pooling on top of the comatching states of each sentence as follows: h s n = MaxPooling (Bi-LSTM (C n )) , (4) where the function MaxPooling(·) is the row-wise max pooling operation.", "h s n ∈ R l , n ∈ [1, N ] is the sentence-level aggregation of the co-matching states.", "All these representations will be further integrated by another Bi-LSTM to get the final triplet matching representation.", "where H s ∈ R l×N is the concatenation of all the sentence-level representations and it is the input of a higher level LSTM.", "h t ∈ R l is the final output of the matching between the sequences of the passage, the question and the candidate answer.", "Objective function For each candidate answer A i , we can build its matching representation h t i ∈ R l with the question and the passage through Eqn.", "(5).", "Our loss function is computed as follows: L(A i |P, Q) = − log exp(w T h t i ) 4 j=1 exp(w T h t j ) , (6) where w ∈ R l is a parameter to learn.", "Experiment To evaluate the effectiveness of our hierarchical co-matching model, we use the RACE dataset (Lai et al., 2017) , which consists of two subsets: RACE-M comes from middle school examinations while RACE-H comes from high school examinations.", "RACE is the combination of the two.", "We compare our model with a number of baseline models.", "We also compare with two 
variants of our model for an ablation study.", "Comparison with Baselines We compare our model with the following baselines: • Sliding Window based method (Richardson et al., 2013) computes the matching score based on the sum of the tf-idf values of the matched words between the question-answer pair and each subpassage with a fixed a window size.", "• Stanford Attentive Reader (AR) (Chen et al., 2016) first builds a question-related passage representation through attention mechanism and then compares it with each candidate answer representation to get the answer probabilities.", "• GA (Dhingra et al., 2017) uses gated attention mechanism with multiple hops to extract the question-related information of the passage and compares it with candidate answers.", "• ElimiNet (Soham et al., 2017) tries to first eliminate the most irrelevant choices and then select the best answer.", "• HAF (Zhou et al., 2018) considers not only the matching between the three sequences, namely, passage, question and candidate answer, but also the matching between the candidate answers.", "• MUSIC (Xu et al., 2017) integrates different sequence matching strategies into the model and also adds a unit of multi-step reasoning for selecting the answer.", "Besides, we also report the following two results as reference points: Turkers is the performance of Amazon Turkers on a randomly sampled subset of the RACE test set.", "Ceiling is the percentage of the unambiguous questions with a correct answer in a subset of the test set.", "The performance of our model together with the baselines are shown in Table 2 .", "We can see that our proposed complete model, Hier-Co-Matching, achieved the best performance among all the public results.", "Still, there is a huge gap between the best machine reading performance and the human performance, showing the great potential for further research.", "Ablation Study Moreover, we conduct an ablation study of our model architecture.", "In this study, we are mainly interested in the contribution of each component introduced in this work to our final results.", "We studied two key factors: (1) the comatching module and (2) the hierarchical aggregation approach.", "We observed a 4 percentage performance decrease by replacing the co-matching module with a single matching state (i.e., only M a in Eqn (3)) by directly concatenating the question with each candidate answer (Yin et al., 2016) .", "We also observe about 2 percentage decrease when we treat the passage as a plain sequence, and run a two-layer LSTM (to ensure the numbers of parameters are comparable) over the whole passage instead of the hierarchical LSTM.", "Question Type Analysis We also conducted an analysis on what types of questions our model can handle better.", "We find that our model obtains similar performance on the \"wh\" questions such as \"why,\" \"what,\" \"when\" and \"where\" questions, on which the performance is usually around 50%.", "We also check statement-justification questions with the keyword \"true\" (e.g., \"Which of the following statements is true\"), negation questions with the keyword \"not\" (e.g., \"which of the following is not true\"), and summarization questions with the keyword \"title\" (e.g., \"what is the best title for the passage?", "\"), and their performance is 51%, 52% and 48%, respectively.", "We can see that the performance of our model on different types of questions in the RACE dataset is quite similar.", "However, our model is only based on wordlevel matching and may not have the ability 
of reasoning.", "In order to answer questions that require summarization, inference or reasoning, we still need to further explore the dataset and improve the model.", "Finally, we further compared our model to the baseline, which concatenates the question with each candidate answer, and our model can achieve better performance on different types of questions.", "For example, on the subset of the questions with pronouns, our model can achieve better accuracy of 49.8% than 47.9%.", "Similarly, on statement-justification questions with the keyword \"true\", our model could achieve better accuracy of 51% than 47%.", "Conclusions In this paper, we proposed a co-matching model for multi-choice reading comprehension.", "The model consists of a co-matching component and a hierarchical aggregation component.", "We showed that our model could achieve state-of-the-art performance on the RACE dataset.", "In the future, we will adapt the idea of co-matching and hierarchical aggregation to the standard open-domain QA setting for answer candidate reranking .", "We will also further study how to explicitly model inference and reasoning on the RACE dataset." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "4" ], "paper_header_content": [ "Introduction", "Model", "Co-matching", "Hierarchical Aggregation", "Objective function", "Experiment", "Conclusions" ] }
GEM-SciDuet-train-96#paper-1251#slide-7
Conclusions
We proposed a hierarchical co-matching model for answering multi-choice reading comprehension questions. We showed that our model could achieve state-of-the-art performance on the RACE dataset. There is still much room for improvement on RACE given the low absolute performance. Latest results by OpenAI: 59%
We proposed a hierarchical co-matching model for answering multi-choice reading comprehension questions. We showed that our model could achieve state-of-the-art performance on the RACE dataset. There is still much room for improvement on RACE given the low absolute performance. Latest results by OpenAI: 59%
[]
GEM-SciDuet-train-97#paper-1252#slide-0
1252
Scoring Lexical Entailment with a Supervised Directional Similarity Network
We present the Supervised Directional Similarity Network (SDSN), a novel neural architecture for learning task-specific transformation functions on top of generalpurpose word embeddings. Relying on only a limited amount of supervision from task-specific scores on a subset of the vocabulary, our architecture is able to generalise and transform a general-purpose distributional vector space to model the relation of lexical entailment. Experiments show excellent performance on scoring graded lexical entailment, raising the stateof-the-art on the HyperLex dataset by approximately 25%.
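The abstract above summarises the SDSN; its forward pass (mutual gating of the two word embeddings, an asymmetric mapping into a task-specific space, and a supervised composition that outputs a graded entailment score) is spelled out in the paper content later in this record. The following is a minimal NumPy sketch of that forward pass, assuming toy dimensions and random stand-ins for the pre-trained embeddings and learned parameters.

# Illustrative NumPy sketch of the SDSN forward pass (Eqns 1-9 in the paper content).
import numpy as np

rng = np.random.default_rng(3)
d_emb, d_map, d_hid, S = 10, 6, 4, 10.0      # embedding/mapping/hidden sizes, max score

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w1, w2 = rng.standard_normal(d_emb), rng.standard_normal(d_emb)   # hyponym, hypernym embeddings

# Mutual gating: each word's features are masked by a gate computed from the other word (Eqns 1-4).
Wg1, bg1 = rng.standard_normal((d_emb, d_emb)), rng.standard_normal(d_emb)
Wg2, bg2 = rng.standard_normal((d_emb, d_emb)), rng.standard_normal(d_emb)
w1_gated = w1 * sigmoid(Wg2 @ w2 + bg2)
w2_gated = w2 * sigmoid(Wg1 @ w1 + bg1)

# Asymmetric mapping into a task-specific space (Eqns 5-6).
Wm1, bm1 = rng.standard_normal((d_map, d_emb)), rng.standard_normal(d_map)
Wm2, bm2 = rng.standard_normal((d_map, d_emb)), rng.standard_normal(d_map)
m1 = np.tanh(Wm1 @ w1_gated + bm1)
m2 = np.tanh(Wm2 @ w2_gated + bm2)

# Supervised composition into a graded entailment score in [0, S] (Eqns 7-9).
Wh, bh = rng.standard_normal((d_hid, d_map)), rng.standard_normal(d_hid)
Wy, by, a = rng.standard_normal(d_hid), rng.standard_normal(), rng.standard_normal()
h = np.tanh(Wh @ (m1 * m2) + bh)
y = S * sigmoid(a * (Wy @ h + by))
print(float(y))                              # predicted lexical-entailment strength

In the full model described later in this record, the hidden layer h is additionally conditioned on 10 sparse distributional features, and training combines a mean-squared-error loss on HyperLex scores with a hinge loss over the extra WordNet/PPDB pairs.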
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118 ], "paper_content_text": [ "Introduction Standard word embedding models (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017) are based on the distributional hypothesis by Harris (1954) .", "However, purely distributional models coalesce various lexico-semantic relations (e.g., synonymy, antonymy, hypernymy) into a joint distributed representation.", "To address this, previous work has focused on introducing supervision into individual word embeddings, allowing them to better capture the desired lexical properties.", "For example, Faruqui et al.", "(2015) and Wieting et al.", "(2015) proposed methods for using annotated lexical relations to condition the vector space and bring synonymous words closer together.", "Mrkšić et al.", "(2016) and Mrkšić et al.", "(2017) improved the optimisation function and introduced an additional constraint for pushing antonym pairs further apart.", "While these methods integrate hand-crafted features from external lexical resources with distributional information, they improve only the embeddings of words that have annotated lexical relations in the training resource.", "In this work, we propose a novel approach to leveraging external knowledge with generalpurpose unsupervised embeddings, focusing on the directional graded lexical entailment task , whereas previous work has mostly investigated simpler non-directional semantic similarity tasks.", "Instead of optimising individual word embeddings, our model uses general-purpose embeddings and optimises a separate neural component to adapt these to the specific task.", "In particular, our neural Supervised Directional Similarity Network (SDSN) dynamically produces task-specific embeddings optimised for scoring the asymmetric lexical entailment relation between any two words, regardless of their presence in the training resource.", "Our results with task-specific embeddings indicate large improvements on the HyperLex dataset, a standard graded lexical entailment benchmark.", "The model also yields improvements on a simpler nongraded entailment detection task.", "The Task of Grading Lexical Entailment In graded lexical entailment, the goal is to make fine-grained assertions regarding the directional hierarchical semantic relationships between concepts .", "The task is grounded in theories of concept (proto)typicality and category vagueness from cognitive science (Rosch, 1975; Kamp and Partee, 1995) , and aims at answering the following question: \"To what degree is X a type of Y ?\".", "It quantifies the degree of lexical entailment instead of providing only a binary yes/no decision on the relationship between the concepts X and Y , as done in hypernymy detection tasks (Kotlerman et al., 2010; Weeds et al., 2014; Santus et al., 2014; Kiela et al., 2015; Shwartz et al., 2017) .", "Graded lexical entailment provides finer-grained judgements on a continuous scale.", "For instance, the word pair (girl → person) has been rated highly with 9.85/10 by the HyperLex annotators.", "The pair (guest → 
person) has received a slightly lower score of 7.22, as a prototypical guest is often a person but there can be exceptions.", "In contrast, the score for the reversed pair (person → guest) is only judged at 2.88.", "As demonstrated by Nickel and Kiela (2017), among others, standard general-purpose representation models trained in an unsupervised way purely on distributional information are unfit for this task and unable to surpass the performance of simple frequency baselines (see also Table 1).", "Therefore, in what follows, we describe a novel supervised framework for constructing task-specific word embeddings, optimised for the graded entailment task at hand.", "System Architecture. The network architecture can be seen in Figure 1.", "The system receives a pair of words as input and predicts a score that represents the strength of the given lexical relation.", "In the graded entailment task, we would like the model to return a high score for (biology → science), as biology is a type of science, but a low score for (point → pencil).", "We start by mapping both input words to corresponding word embeddings w_1 and w_2.", "The embeddings come from a standard distributional vector space, pre-trained on a large unannotated corpus, and are not fine-tuned during training.", "An element-wise gating operation is then applied to each word, conditioned on the other word: g_1 = σ(W_{g1} w_1 + b_{g1}) (1), g_2 = σ(W_{g2} w_2 + b_{g2}) (2), w̃_1 = w_1 ⊙ g_2 (3), w̃_2 = w_2 ⊙ g_1 (4), where W_{g1} and W_{g2} are weight matrices, b_{g1} and b_{g2} are bias vectors, σ(·) is the logistic function and ⊙ indicates element-wise multiplication.", "This operation allows the network to first observe the candidate hypernym w_2 and then decide which features are important when analysing the hyponym w_1.", "For example, when deciding whether seal is a type of animal, the model is able to first see the word animal and then apply a mask that blocks out features of the word seal that are not related to nature.", "During development we found it best to apply this gating in both directions; therefore we condition each word on the other.", "Each of the word representations is then passed through a non-linear layer with tanh activation, mapping the words to a new space that is more suitable for the given task: m_1 = tanh(W_{m1} w̃_1 + b_{m1}) (5), m_2 = tanh(W_{m2} w̃_2 + b_{m2}) (6), where W_{m1}, W_{m2}, b_{m1} and b_{m2} are trainable parameters.", "The input embeddings are trained to predict surrounding words on a large unannotated corpus using the skip-gram objective (Mikolov et al., 2013), making the resulting vector space reflect (a broad relation of) semantic relatedness but remain unsuitable for lexical entailment.", "The mapping stage allows the network to learn a transformation function from the general skip-gram embeddings to a task-specific space for lexical entailment.", "In addition, the two weight matrices enable asymmetric reasoning, allowing the network to learn separate mappings for hyponyms and hypernyms.", "We then use a supervised composition function for combining the two representations and returning a confidence score as output.", "Rei et al. (2017) described a generalised version of cosine similarity for metaphor detection, constructing a supervised operation and learning individual weights for each feature.", "We apply a similar approach here and modify it to predict a relation score: d = m_1 ⊙ m_2 (7), h = tanh(W_h d + b_h) (8), y = S · σ(a(W_y h + b_y)) (9), where W_h, b_h, a, W_y and b_y are trainable parameters.", "The annotated labels of lexical relations are generally in a fixed range, so we base the output function on logistic regression, which also restricts the range of the predicted scores.", "b_y allows the function to be shifted as necessary and a controls the slope of the sigmoid.", "S is the value of the maximum score in the dataset, scaling the resulting value to the correct range.", "The output y represents the confidence that the two input words are in a lexical entailment relation.", "We optimise the model by minimising the mean squared distance between the predicted score y and the gold-standard score ŷ: L = Σ_i (y_i − ŷ_i)^2 (10).", "Sparse Distributional Features (SDF).", "Word embeddings are well-suited for capturing distributional similarity, but they have trouble encoding features such as word frequency or the number of unique contexts the word has appeared in.", "This information becomes important when deciding whether one word entails another, as the system needs to determine when a concept is more general and subsumes the other.", "We construct classical sparse distributional word vectors and use them to extract five unique features for every word pair, to complement the features extracted from neural embeddings: • Regular cosine similarity between the sparse distributional vectors of both words.", "• The sparse weighted cosine measure, described by Rei and Briscoe (2014), comparing the weighted ranks of different distributional contexts.", "The measure is directional and assigns more importance to the features of the broader term.", "We include this weighted cosine in both directions.", "• The proportion of shared unique contexts, compared to the number of contexts for one word.", "This measure captures whether one of the words appears in a subset of the contexts of the other word.", "This feature is also directional and is therefore included in both directions.", "We build the sparse distributional word vectors from two versions of the British National Corpus (Leech, 1992).", "The first counts contexts simply based on a window of size 3.", "The second uses a parsed version of the BNC (Andersen et al., 2008) and extracts contexts based on dependency relations.", "In both cases, the features are weighted using pointwise mutual information.", "Each of the five features is calculated separately for the two vector spaces, resulting in 10 corpus-based features.", "We integrate them into the network by conditioning the hidden layer h on this vector: h = tanh(W_h d + W_x x + b_h) (11), where x is the feature vector of length 10 and W_x is the corresponding weight matrix.", "Additional Supervision (AS).", "Methods such as retrofitting (Faruqui et al., 2015), ATTRACT-REPEL (Mrkšić et al., 2017) and Poincaré embeddings (Nickel and Kiela, 2017) make use of hand-annotated lexical relations for optimising word representations such that they capture the desired properties (so-called embedding specialisation).", "We also experiment with incorporating these resources, but instead of adjusting the individual word embeddings, we use them to optimise the shared network weights.", "This teaches the model to find useful regularities in general-purpose word embeddings, which can then be equally applied to all words in the embedding vocabulary.", "For hyponym detection, we extract examples from WordNet (Miller, 1995) and the Paraphrase Database (PPDB 2.0) (Pavlick et al., 2015).", "We use WordNet synonyms and hyponyms as positive examples, along with antonyms and hypernyms as negative examples.", "In order to prevent the network from biasing towards specific words that have numerous annotated relations, we limit these to a maximum of 10 examples per word.", "From the PPDB we extract the Equivalence relations as positive examples and the Exclusion relations as negative word pairs.", "The final dataset contains 102,586 positive pairs and 42,958 negative pairs.", "However, these word pairs carry only binary labels, whereas the task requires predicting a graded score.", "Initial experiments with optimising the network to predict the minimal and maximal possible score for these cases did not lead to improved performance.", "Therefore, we instead make use of a hinge loss function that optimises the network to only push these examples to the correct side of the decision boundary: L = Σ_i max((y_i − ŷ_i)^2 − (S/2 − R)^2, 0) (12), where S is the maximum score in the range and R is a margin parameter.", "By minimising Equation 12, the model is only updated based on examples that are not yet on the correct side of the boundary, including a margin.", "This prevents us from penalising the model for predicting a score with slight variations, as the extracted examples are not annotated with sufficient granularity.", "When optimising the model, we first perform one pretraining pass over these additional word pairs before proceeding with the regular training process.", "Evaluation. SDSN Training Setup.", "As input to the SDSN network we use 300-dimensional dependency-based word embeddings by Levy and Goldberg (2014).", "Layers m_1 and m_2 also have size 300 and layer h has size 100.", "For regularisation, we apply dropout to the embeddings with p = 0.5.", "The margin R is set to 1 for the supervised pre-training stage.", "The model is optimised using AdaDelta (Zeiler, 2012) with learning rate 1.0.", "In order to control for random noise, we run each experiment with 10 different random seeds and average the results.", "Our code and detailed configuration files will be made available online.", "Evaluation Data.", "We evaluate graded lexical entailment on the HyperLex dataset, which contains 2,616 word pairs in total, scored for the asymmetric graded lexical entailment relation.", "Following standard practice, we report Spearman's ρ correlation between the model output and the human-annotated scores.", "We conduct experiments on two standard data splits for supervised learning: the random split and the lexical split.", "In the random split the data is randomly divided into training, validation, and test subsets containing 1831, 130, and 655 word pairs, respectively.", "In the lexical split, proposed by Levy et al. (2015), there is no lexical overlap between training and test subsets.", "This prevents the effect of lexical memorisation, as supervised models tend to learn an independent property of a single concept in the pair instead of learning a relation between the two concepts.", "In this setup, training, validation, and test sets contain 1133, 85, and 269 word pairs, respectively.", "Since much related research on lexical entailment is still focused on the simpler binary detection of asymmetric relations, we also run experiments on the large binary detection HypeNet dataset (Shwartz et al., 2016), where the SDSN output is converted to binary decisions.", "We again report scores for both the random and lexical splits.", "Results and Analysis.", "The results on the two HyperLex splits are presented in Table 1, along with the best configurations reported in prior work.", "We refer the interested reader to the original HyperLex paper for a detailed description of the best-performing baseline models.", "The Supervised Directional Similarity Network (SDSN) achieves substantially better scores than all other tested systems, despite relying on a much simpler supervision signal.", "The previous top approaches, including the Paragram+CF embeddings, make use of numerous annotations provided by WordNet or similarly rich lexical resources, while SDSN and SDSN+SDF use only the designated relation-specific training set and corpus statistics.", "By also including these extra training instances (SDSN+SDF+AS), we gain additional performance and push the correlation to 0.692 on the random split and 0.544 on the lexical split of HyperLex, an improvement of approximately 25% over the standard supervised training regime.", "In Table 3 we provide some example output from the final SDSN+SDF+AS model.", "It successfully assigns a high score to (captain, officer) and also identifies with high confidence that wing is not a type of airplane, even though they are semantically related.", "As an example of incorrect output, the model fails to assign a high score to (prince, royalty), possibly due to the usage patterns of these words being different in context.", "In contrast, it assigns an unexpectedly high score to (kid, parent), likely due to the high distributional similarity of these words.", "Glavaš and Ponzetto (2017) proposed a related dual tensor model for the binary detection of asymmetric relations (Dual-T).", "In order to compare our system to theirs, we train our model on HypeNet and convert the output to binary decisions.", "We also compare SDSN to the best reported models of Shwartz et al. (2016) and Roller and Erk (2016), which combine distributional and pattern-based information for hypernymy detection (HypeNet-hybrid and H-feature, respectively).", "We do not include additional WordNet and PPDB examples in these experiments, as the HypeNet data already subsumes most of them.", "As can be seen in Table 2, our SDSN+SDF model also achieves the best results on the HypeNet dataset, outperforming previous models on both data splits.", "Conclusion. We introduce a novel neural architecture for mapping and specialising a vector space based on limited supervision.", "While prior work has focused only on optimising individual word embeddings available in external resources, our model uses general-purpose embeddings and optimises a separate neural component to adapt these to the specific task, generalising to unseen data.", "For more detail on the baseline models, we refer the reader to the original papers.", "Table 3: Example word pairs from the HyperLex development set.", "S is the human-annotated score in the HyperLex dataset.", "P is the predicted score using the SDSN+SDF+AS model.", "The system achieves new state-of-the-art results on the task of scoring graded lexical entailment.", "Future work could apply the model to other lexical relations or extend it to cover multiple relations simultaneously." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "The Task of Grading Lexical Entailment", "System Architecture", "Evaluation", "Conclusion" ] }
GEM-SciDuet-train-97#paper-1252#slide-0
Lexical Relations
Task: Graded lexical entailment To what degree is X a type of Y? girl → person, guest → person, person → guest Useful for query expansion, natural language inference, paraphrasing, machine translation, etc. Distributional vectors are not great for directional lexical relations (carrot – vegetable, new – old) BUT these mostly affect words that are in the training data
Task: Graded lexical entailment To what degree is X a type of Y? girl → person, guest → person, person → guest Useful for query expansion, natural language inference, paraphrasing, machine translation, etc. Distributional vectors are not great for directional lexical relations (carrot – vegetable, new – old) BUT these mostly affect words that are in the training data
[]
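The scoring architecture described in the paper content above (Equations 1–9) is compact enough to sketch directly. The following is an illustrative PyTorch reimplementation, not the authors' released code: the module layout and variable names are our own, and only the layer sizes (300-dimensional embeddings, a 100-dimensional hidden layer) and the bounded score range follow the reported setup.

```python
import torch
import torch.nn as nn

class SDSN(nn.Module):
    """Sketch of the SDSN scoring function: gating (Eqs. 1-4),
    task-specific mapping (Eqs. 5-6) and supervised composition (Eqs. 7-9)."""

    def __init__(self, emb_dim=300, hidden_dim=100, max_score=10.0):
        super().__init__()
        self.gate1 = nn.Linear(emb_dim, emb_dim)      # W_g1, b_g1
        self.gate2 = nn.Linear(emb_dim, emb_dim)      # W_g2, b_g2
        self.map1 = nn.Linear(emb_dim, emb_dim)       # W_m1, b_m1
        self.map2 = nn.Linear(emb_dim, emb_dim)       # W_m2, b_m2
        self.hidden = nn.Linear(emb_dim, hidden_dim)  # W_h, b_h
        self.out = nn.Linear(hidden_dim, 1)           # W_y, b_y
        self.a = nn.Parameter(torch.ones(1))          # slope of the output sigmoid
        self.max_score = max_score                    # S: top of the score range

    def forward(self, w1, w2):
        # w1, w2: fixed pre-trained embeddings of the hyponym and candidate hypernym.
        # Bidirectional gating: each word is masked by a gate computed from the other.
        w1_gated = w1 * torch.sigmoid(self.gate2(w2))
        w2_gated = w2 * torch.sigmoid(self.gate1(w1))
        # Asymmetric mapping into a task-specific space.
        m1 = torch.tanh(self.map1(w1_gated))
        m2 = torch.tanh(self.map2(w2_gated))
        # Supervised composition of the element-wise product, scaled into [0, S].
        h = torch.tanh(self.hidden(m1 * m2))
        return (self.max_score * torch.sigmoid(self.a * self.out(h))).squeeze(-1)
```

Given 300-dimensional embeddings for a word pair, `SDSN()(w1, w2)` returns a graded entailment score in [0, 10]; as in the description above, the input embeddings themselves are not fine-tuned.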
GEM-SciDuet-train-97#paper-1252#slide-1
1252
GEM-SciDuet-train-97#paper-1252#slide-1
Main Idea
Specialized network for directional lexical relations Train the network to discover task-specific regularities in the embeddings
Specialized network for directional lexical relations Train the network to discover task-specific regularities in the embeddings
[]
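The two training objectives described in the paper content above translate directly into code: Equation 10 is a mean squared error against the graded gold scores, and Equation 12 is the hinge-style loss used for the binary WordNet/PPDB pairs, which only penalises predictions still on the wrong side of the S/2 boundary (with margin R). The sketch below is illustrative; the function names and the convention of encoding positive/negative pairs as gold scores of S and 0 are our assumptions.

```python
import torch

def graded_loss(pred, gold):
    # Eq. 10: squared distance to the human-annotated graded score.
    return ((pred - gold) ** 2).sum()

def binary_hinge_loss(pred, gold, max_score=10.0, margin=1.0):
    # Eq. 12: gold is max_score for positive pairs and 0 for negative pairs.
    # Examples already past the decision boundary (plus margin) contribute zero loss.
    boundary = (max_score / 2.0 - margin) ** 2
    return torch.clamp((pred - gold) ** 2 - boundary, min=0.0).sum()
```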
GEM-SciDuet-train-97#paper-1252#slide-2
1252
GEM-SciDuet-train-97#paper-1252#slide-2
Supervised Directional Similarity Network
Fixed pre-trained word embeddings as input Predict a score indicating the strength of a specific lexical relation
Fixed pre-trained word embeddings as input Predict a score indicating the strength of a specific lexical relation
[]
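Two of the sparse distributional features listed in the paper content above — the plain cosine between sparse vectors and the directional proportion of shared unique contexts — can be sketched as follows. Representing each word's contexts as a dictionary of PMI weights is our assumption for illustration; this is not code from the paper.

```python
import math

def cosine(u, v):
    # Symmetric cosine similarity between two sparse PMI-weighted context vectors.
    dot = sum(u[c] * v[c] for c in set(u) & set(v))
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def shared_context_proportion(u, v):
    # Directional feature: the share of u's unique contexts that also occur with v,
    # which tends to be high when u's contexts are a subset of the broader term v's.
    return len(set(u) & set(v)) / len(u) if u else 0.0
```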
GEM-SciDuet-train-97#paper-1252#slide-5
1252
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "The Task of Grading Lexical Entailment", "System Architecture", "Evaluation", "Conclusion" ] }
GEM-SciDuet-train-97#paper-1252#slide-5
SDSN Sparse Features
Features based on sparse distributional representations: ratio of shared contexts
Features based on sparse distributional representations: ratio of shared contexts
[]
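The sparse distributional features (SDF) referred to in this record are directional statistics over count-based context vectors. Below is a rough sketch of two of them (plain cosine and the shared-context proportion); the weighted cosine of Rei and Briscoe (2014) and the PMI-weighted corpus counts are omitted, and the example vectors are invented purely for illustration.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as {context: weight} dicts."""
    dot = sum(u[c] * v[c] for c in set(u) & set(v))
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def shared_context_proportion(word, other):
    """Directional feature: fraction of `word`'s contexts that also occur with `other`."""
    return len(set(word) & set(other)) / len(word) if word else 0.0

# Toy PMI-weighted context vectors (real ones are built from BNC counts)
seal = {"swim": 2.1, "ocean": 1.8, "fur": 1.2}
animal = {"swim": 1.0, "ocean": 0.7, "fur": 0.9, "eat": 1.5, "wild": 1.1}
features = [
    cosine(seal, animal),
    shared_context_proportion(seal, animal),   # hyponym -> hypernym direction
    shared_context_proportion(animal, seal),   # reverse direction
]
```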
GEM-SciDuet-train-97#paper-1252#slide-6
1252
Scoring Lexical Entailment with a Supervised Directional Similarity Network
We present the Supervised Directional Similarity Network (SDSN), a novel neural architecture for learning task-specific transformation functions on top of general-purpose word embeddings. Relying on only a limited amount of supervision from task-specific scores on a subset of the vocabulary, our architecture is able to generalise and transform a general-purpose distributional vector space to model the relation of lexical entailment. Experiments show excellent performance on scoring graded lexical entailment, raising the state-of-the-art on the HyperLex dataset by approximately 25%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118 ], "paper_content_text": [ "Introduction Standard word embedding models (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017) are based on the distributional hypothesis by Harris (1954) .", "However, purely distributional models coalesce various lexico-semantic relations (e.g., synonymy, antonymy, hypernymy) into a joint distributed representation.", "To address this, previous work has focused on introducing supervision into individual word embeddings, allowing them to better capture the desired lexical properties.", "For example, Faruqui et al.", "(2015) and Wieting et al.", "(2015) proposed methods for using annotated lexical relations to condition the vector space and bring synonymous words closer together.", "Mrkšić et al.", "(2016) and Mrkšić et al.", "(2017) improved the optimisation function and introduced an additional constraint for pushing antonym pairs further apart.", "While these methods integrate hand-crafted features from external lexical resources with distributional information, they improve only the embeddings of words that have annotated lexical relations in the training resource.", "In this work, we propose a novel approach to leveraging external knowledge with generalpurpose unsupervised embeddings, focusing on the directional graded lexical entailment task , whereas previous work has mostly investigated simpler non-directional semantic similarity tasks.", "Instead of optimising individual word embeddings, our model uses general-purpose embeddings and optimises a separate neural component to adapt these to the specific task.", "In particular, our neural Supervised Directional Similarity Network (SDSN) dynamically produces task-specific embeddings optimised for scoring the asymmetric lexical entailment relation between any two words, regardless of their presence in the training resource.", "Our results with task-specific embeddings indicate large improvements on the HyperLex dataset, a standard graded lexical entailment benchmark.", "The model also yields improvements on a simpler nongraded entailment detection task.", "The Task of Grading Lexical Entailment In graded lexical entailment, the goal is to make fine-grained assertions regarding the directional hierarchical semantic relationships between concepts .", "The task is grounded in theories of concept (proto)typicality and category vagueness from cognitive science (Rosch, 1975; Kamp and Partee, 1995) , and aims at answering the following question: \"To what degree is X a type of Y ?\".", "It quantifies the degree of lexical entailment instead of providing only a binary yes/no decision on the relationship between the concepts X and Y , as done in hypernymy detection tasks (Kotlerman et al., 2010; Weeds et al., 2014; Santus et al., 2014; Kiela et al., 2015; Shwartz et al., 2017) .", "Graded lexical entailment provides finer-grained judgements on a continuous scale.", "For instance, the word pair (girl → person) has been rated highly with 9.85/10 by the HyperLex annotators.", "The pair (guest → 
person) has received a slightly lower score of 7.22, as a prototypical guest is often a person but there can be exceptions.", "In contrast, the score for the reversed pair (person → guest) is only judged at 2.88.", "As demonstrated by and Nickel and Kiela (2017) , standard general-purpose representation models trained in an unsupervised way purely on distributional information are unfit for this task and unable to surpass the performance of simple frequency baselines (see also Table 1 ).", "Therefore, in what follows, we describe a novel supervised framework for constructing task-specific word embeddings, optimised for the graded entailment task at hand.", "System Architecture The network architecture can be seen in Figure 1 .", "The system receives a pair of words as input and predicts a score that represents the strength of the given lexical relation.", "In the graded entailment task, we would like the model to return a high score for (biology → science), as biology is a type of science, but a low score for (point → pencil).", "We start by mapping both input words to corresponding word embeddings w 1 and w 2 .", "The embeddings come from a standard distributional vector space, pre-trained on a large unannotated corpus, and are not fine-tuned during training.", "An element-wise gating operation is then applied to each word, conditioned on the other word: g 1 = σ(W g 1 w 1 + b g 1 ) (1) g 2 = σ(W g 2 w 2 + b g 2 ) (2) w 1 = w 1 g 2 (3) w 2 = w 2 g 1 (4) where W g 1 and W g 2 are weight matrices, b g 1 and b g 2 are bias vectors, σ() is the logistic function and indicates element-wise multiplication.", "This operation allows the network to first observe the candidate hypernym w 2 and then decide which features are important when analysing the hyponym w 1 .", "For example, when deciding whether seal is a type of animal, the model is able to first see the word animal and then apply a mask that blocks out features of the word seal that are not related to nature.", "During development we found it best to apply this gating in both directions, therefore we condition each word based on the other.", "Each of the word representations is then passed through a non-linear layer with tanh activation, mapping the words to a new space that is more suitable for the given task: m 1 = tanh(W m 1 w 1 + b m 1 ) (5) m 2 = tanh(W m 2 w 2 + b m 2 ) (6) where W m 1 , W m 2 , b m 1 and b m 2 are trainable parameters.", "The input embeddings are trained to predict surrounding words on a large unannotated corpus using the skip-gram objective (Mikolov et al., 2013) , making the resulting vector space reflect (a broad relation of) semantic relatedness but unsuitable for lexical entailment .", "The mapping stage allows the network to learn a transformation function from the general skip-gram embeddings to a task-specific space for lexical entailment.", "In addition, the two weight matrices enable asymmetric reasoning, allowing the network to learn separate mappings for hyponyms and hypernyms.", "We then use a supervised composition function for combining the two representations and returning a confidence score as output.", "Rei et al.", "(2017) described a generalised version of cosine similarity for metaphor detection, constructing a supervised operation and learning individual weights for each feature.", "We apply a similar approach here and modify it to predict a relation score: d = m 1 m 2 (7) h = tanh(W h d + b h ) (8) y = S · σ(a(W y h + b y )) (9) where W h , b h , a, W y and b y are trainable parameters.", "The 
annotated labels of lexical relations are generally in a fixed range, therefore we base the output function on logistic regression, which also restricts the range of the predicted scores.", "b y allows for the function to be shifted as necessary and a controls the slope of the sigmoid.", "S is the value of the maximum score in the dataset, scaling the resulting value to the correct range.", "The output y represents the confidence that the two input words are in a lexical entailment relation.", "We optimise the model by minimising the mean squared distance between the predicted score y and the gold-standard scoreŷ: L = i (y i −ŷ i ) 2 (10) Sparse Distributional Features (SDF).", "Word embeddings are well-suited for capturing distributional similarity, but they have trouble encoding features such as word frequency, or the number of unique contexts the word has appeared in.", "This information becomes important when deciding whether one word entails another, as the system needs to determine when a concept is more general and subsumes the other.", "We construct classical sparse distributional word vectors and use them to extract 5 unique features for every word pair, to complement the features extracted from neural embeddings: • Regular cosine similarity between the sparse distributional vectors of both words.", "• The sparse weighted cosine measure, described by Rei and Briscoe (2014) , comparing the weighted ranks of different distributional contexts.", "The measure is directional and assigns more importance to the features of the broader term.", "We include this weighted cosine in both directions.", "• The proportion of shared unique contexts, compared to the number of contexts for one word.", "This measure is able to capture whether one of the words appears in a subset of the contexts, compared to the other word.", "This feature is also directional and is therefore included in both directions.", "We build the sparse distributional word vectors from two versions of the British National Corpus (Leech, 1992) .", "The first counts contexts simply based on a window of size 3.", "The second uses a parsed version of the BNC (Andersen et al., 2008) and extracts contexts based on dependency relations.", "In both cases, the features are weighted using pointwise mutual information.", "Each of the five features is calculated separately for the two vector spaces, resulting in 10 corpus-based features.", "We integrate them into the network by conditioning the hidden layer h on this vector: h = tanh(W h d + W x x + b h ) (11) where x is the feature vector of length 10 and W x is the corresponding weight matrix.", "Additional Supervision (AS).", "Methods such as retrofitting (Faruqui et al., 2015) , ATTRACT-REPEL (Mrkšić et al., 2017) and Poincaré embeddings (Nickel and Kiela, 2017 ) make use of handannotated lexical relations for optimising word representations such that they capture the desired properties (so-called embedding specialisation).", "We also experiment with incorporating these resources, but instead of adjusting the individual word embeddings, we use them to optimise the shared network weights.", "This teaches the model to find useful regularities in general-purpose word embeddings, which can then be equally applied to all words in the embedding vocabulary.", "For hyponym detection, we extract examples from WordNet (Miller, 1995) and the Paraphrase Database (PPDB 2.0) (Pavlick et al., 2015) .", "We use WordNet synonyms and hyponyms as positive examples, along with antonyms and hypernyms as 
negative examples.", "In order to prevent the network from biasing towards specific words that have numerous annotated relations, we limit them to a maximum of 10 examples per word.", "From the PPDB we extract the Equivalence relations as positive examples and the Exclusion relations as negative word pairs.", "The final dataset contains 102,586 positive pairs and 42,958 negative pairs.", "However, only binary labels are attached to all word pairs, whereas the task requires predicting a graded score.", "Initial experiments with optimising the network to predict the minimal and maximal possible score for these cases did not lead to improved performance.", "Therefore, we instead make use of a hinge loss function that optimises the network to only push these examples to the correct side of the decision boundary: L = i max((y −ŷ) 2 − ( S 2 − R) 2 , 0) (12) where S is the maximum score in the range and and R is a margin parameter.", "By minimising Equation 12, the model is only updated based on examples that are not yet on the correct side of the boundary, including a margin.", "This prevents us from penalising the model for predicting a score with slight variations, as the extracted examples are not annotated with sufficient granularity.", "When optimising the model, we first perform one pretraining pass over these additional word pairs before proceeding with the regular training process.", "Evaluation SDSN Training Setup.", "As input to the SDSN network we use 300-dimensional dependency-based word embeddings by Levy and Goldberg (2014) .", "Layers m 1 and m 2 also have size 300 and layer h has size 100.", "For regularisation, we apply dropout to the embeddings with p = 0.5.", "The margin R is set to 1 for the supervised pre-training stage.", "The model is optimised using AdaDelta (Zeiler, 2012) with learning rate 1.0.", "In order to control for random noise, we run each experiment with 10 different random seeds and average the results.", "Our code and detailed configuration files will be made available online.", "1 Evaluation Data.", "We evaluate graded lexical entailment on the HyperLex dataset which contains 2,616 word pairs in total scored for the asymmetric graded lexical entailment relation.", "Following a standard practice, we report Spearman's ρ correlation of the model output to the given human-annotated scores.", "We conduct experiments on two standard data splits for supervised learning: random split and lexical split.", "In the random split the data is randomly divided into training, validation, and test subsets containing 1831, 130, and 655 word pairs, respectively.", "In the lexical split, proposed by Levy et al.", "(2015) , there is no lexical overlap between training and test subsets.", "This prevents the effect of lexical memorisation, as supervised models tend to learn an independent property of a single concept in the pair instead of learning a relation between the two concepts.", "In this setup training, validation, and test sets contain 1133, 85, and 269 word pairs, respectively.", "2 Since plenty of related research on lexical entailment is still focused on the simpler binary detection of asymmetric relations, we also run experiments on the large binary detection HypeNet dataset (Shwartz et al., 2016) , where the SDSN output is converted to binary decisions.", "We again report scores for both random and lexical split.", "Results and Analysis.", "The results on two Hyper-Lex splits are presented in Table 1 , along with the best configurations reported by .", "We refer the 
interested reader to the original Hy-perLex paper for a detailed description of the best performing baseline models.", "The Supervised Directional Similarity Network (SDSN) achieves substantially better scores than all other tested systems, despite relying on a much simpler supervision signal.", "The previous top approaches, including the Paragram+CF embeddings, make use of numerous annotations provided by WordNet or similarly rich lexical resources, while for SDSN and SDSN+SDF only use the designated relation-specific training set and corpus statistics.", "By also including these extra training instances (SDSN+SDF+AS), we can gain additional perfor- mance and push the correlation to 0.692 on the random split and 0.544 on the lexical split of Hy-perLex, an improvement of approximately 25% to the standard supervised training regime.", "In Table 3 we provide some example output from the final SDSN+SDF+AS model.", "It is able to successfully assign a high score to (captain, officer) and also identify with high confidence that wing is not a type of airplane, even though they are semantically related.", "As an example of incorrect output, the model fails to assign a high score to (prince, royalty), possibly due to the usage patterns of these words being different in context.", "In contrast, it assigns an unexpectedly high score to (kid, parent), likely due to the high distributional similarity of these words.", "Glavaš and Ponzetto (2017) proposed a related dual tensor model for the binary detection of asymmetric relations (Dual-T).", "In order to compare our system to theirs, we train our model on HypeNet and convert the output to binary decisions.", "We also compare SDSN to the best reported models of Shwartz et al.", "(2016) and Roller and Erk (2016) , which combine distributional and pattern-based information for hypernymy detection (HypeNethybrid and H-feature, respectively).", "3 We do not include additional WordNet and PPDB examples in these experiments, as the HypeNet data already subsumes most of them.", "As can be seen in Table 2 , our SDSN+SDF model achieves the best results also on the HypeNet dataset, outperforming previous models on both data splits.", "Conclusion We introduce a novel neural architecture for mapping and specialising a vector space based on limited supervision.", "While prior work has focused only on optimising individual word embeddings available in external resources, our model uses 3 For more detail on the baseline models, we refer the reader to the original papers.", "Table 3 : Example word pairs from the HyperLex development set.", "S is the human-annotated score in the HyperLex dataset.", "P is the predicted score using the SDSN+SDF+AS model.", "general-purpose embeddings and optimises a separate neural component to adapt these to the specific task, generalising to unseen data.", "The system achieves new state-of-the-art results on the task of scoring graded lexical entailment.", "Future work could apply the model to other lexical relations or extend it to cover multiple relations simultaneously." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "The Task of Grading Lexical Entailment", "System Architecture", "Evaluation", "Conclusion" ] }
GEM-SciDuet-train-97#paper-1252#slide-6
SDSN Scoring
Mapping the representations to a score
Mapping the representations to a score
[]
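A sketch of the supervised scoring component that the "SDSN Scoring" slide above refers to (Eqs. 7-9, with the hidden layer additionally conditioned on the 10 corpus-based features as in Eq. 11). This is an illustrative PyTorch rendering rather than the original implementation; the maximum score S is 10 for HyperLex, and the learned per-dimension weights are what distinguish this head from a plain cosine.

```python
import torch
import torch.nn as nn

class ScoringHead(nn.Module):
    """Combine the two mapped word vectors into a graded entailment score in [0, S]."""
    def __init__(self, mapped_dim=300, hidden_dim=100, n_sparse=10, max_score=10.0):
        super().__init__()
        self.hidden = nn.Linear(mapped_dim, hidden_dim)            # W_h, b_h (Eq. 8)
        self.sparse = nn.Linear(n_sparse, hidden_dim, bias=False)  # W_x (Eq. 11)
        self.out = nn.Linear(hidden_dim, 1)                        # W_y, b_y (Eq. 9)
        self.a = nn.Parameter(torch.ones(1))                       # slope of the sigmoid
        self.max_score = max_score                                 # S

    def forward(self, m1, m2, x):
        d = m1 * m2                                                # Eq. 7: element-wise product
        h = torch.tanh(self.hidden(d) + self.sparse(x))            # Eq. 11
        y = self.max_score * torch.sigmoid(self.a * self.out(h))   # Eq. 9
        return y.squeeze(-1)
```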
GEM-SciDuet-train-97#paper-1252#slide-8
1252
Scoring Lexical Entailment with a Supervised Directional Similarity Network
We present the Supervised Directional Similarity Network (SDSN), a novel neural architecture for learning task-specific transformation functions on top of general-purpose word embeddings. Relying on only a limited amount of supervision from task-specific scores on a subset of the vocabulary, our architecture is able to generalise and transform a general-purpose distributional vector space to model the relation of lexical entailment. Experiments show excellent performance on scoring graded lexical entailment, raising the state-of-the-art on the HyperLex dataset by approximately 25%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118 ], "paper_content_text": [ "Introduction Standard word embedding models (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017) are based on the distributional hypothesis by Harris (1954) .", "However, purely distributional models coalesce various lexico-semantic relations (e.g., synonymy, antonymy, hypernymy) into a joint distributed representation.", "To address this, previous work has focused on introducing supervision into individual word embeddings, allowing them to better capture the desired lexical properties.", "For example, Faruqui et al.", "(2015) and Wieting et al.", "(2015) proposed methods for using annotated lexical relations to condition the vector space and bring synonymous words closer together.", "Mrkšić et al.", "(2016) and Mrkšić et al.", "(2017) improved the optimisation function and introduced an additional constraint for pushing antonym pairs further apart.", "While these methods integrate hand-crafted features from external lexical resources with distributional information, they improve only the embeddings of words that have annotated lexical relations in the training resource.", "In this work, we propose a novel approach to leveraging external knowledge with generalpurpose unsupervised embeddings, focusing on the directional graded lexical entailment task , whereas previous work has mostly investigated simpler non-directional semantic similarity tasks.", "Instead of optimising individual word embeddings, our model uses general-purpose embeddings and optimises a separate neural component to adapt these to the specific task.", "In particular, our neural Supervised Directional Similarity Network (SDSN) dynamically produces task-specific embeddings optimised for scoring the asymmetric lexical entailment relation between any two words, regardless of their presence in the training resource.", "Our results with task-specific embeddings indicate large improvements on the HyperLex dataset, a standard graded lexical entailment benchmark.", "The model also yields improvements on a simpler nongraded entailment detection task.", "The Task of Grading Lexical Entailment In graded lexical entailment, the goal is to make fine-grained assertions regarding the directional hierarchical semantic relationships between concepts .", "The task is grounded in theories of concept (proto)typicality and category vagueness from cognitive science (Rosch, 1975; Kamp and Partee, 1995) , and aims at answering the following question: \"To what degree is X a type of Y ?\".", "It quantifies the degree of lexical entailment instead of providing only a binary yes/no decision on the relationship between the concepts X and Y , as done in hypernymy detection tasks (Kotlerman et al., 2010; Weeds et al., 2014; Santus et al., 2014; Kiela et al., 2015; Shwartz et al., 2017) .", "Graded lexical entailment provides finer-grained judgements on a continuous scale.", "For instance, the word pair (girl → person) has been rated highly with 9.85/10 by the HyperLex annotators.", "The pair (guest → 
person) has received a slightly lower score of 7.22, as a prototypical guest is often a person but there can be exceptions.", "In contrast, the score for the reversed pair (person → guest) is only judged at 2.88.", "As demonstrated by and Nickel and Kiela (2017) , standard general-purpose representation models trained in an unsupervised way purely on distributional information are unfit for this task and unable to surpass the performance of simple frequency baselines (see also Table 1 ).", "Therefore, in what follows, we describe a novel supervised framework for constructing task-specific word embeddings, optimised for the graded entailment task at hand.", "System Architecture The network architecture can be seen in Figure 1 .", "The system receives a pair of words as input and predicts a score that represents the strength of the given lexical relation.", "In the graded entailment task, we would like the model to return a high score for (biology → science), as biology is a type of science, but a low score for (point → pencil).", "We start by mapping both input words to corresponding word embeddings w 1 and w 2 .", "The embeddings come from a standard distributional vector space, pre-trained on a large unannotated corpus, and are not fine-tuned during training.", "An element-wise gating operation is then applied to each word, conditioned on the other word: g 1 = σ(W g 1 w 1 + b g 1 ) (1) g 2 = σ(W g 2 w 2 + b g 2 ) (2) w 1 = w 1 g 2 (3) w 2 = w 2 g 1 (4) where W g 1 and W g 2 are weight matrices, b g 1 and b g 2 are bias vectors, σ() is the logistic function and indicates element-wise multiplication.", "This operation allows the network to first observe the candidate hypernym w 2 and then decide which features are important when analysing the hyponym w 1 .", "For example, when deciding whether seal is a type of animal, the model is able to first see the word animal and then apply a mask that blocks out features of the word seal that are not related to nature.", "During development we found it best to apply this gating in both directions, therefore we condition each word based on the other.", "Each of the word representations is then passed through a non-linear layer with tanh activation, mapping the words to a new space that is more suitable for the given task: m 1 = tanh(W m 1 w 1 + b m 1 ) (5) m 2 = tanh(W m 2 w 2 + b m 2 ) (6) where W m 1 , W m 2 , b m 1 and b m 2 are trainable parameters.", "The input embeddings are trained to predict surrounding words on a large unannotated corpus using the skip-gram objective (Mikolov et al., 2013) , making the resulting vector space reflect (a broad relation of) semantic relatedness but unsuitable for lexical entailment .", "The mapping stage allows the network to learn a transformation function from the general skip-gram embeddings to a task-specific space for lexical entailment.", "In addition, the two weight matrices enable asymmetric reasoning, allowing the network to learn separate mappings for hyponyms and hypernyms.", "We then use a supervised composition function for combining the two representations and returning a confidence score as output.", "Rei et al.", "(2017) described a generalised version of cosine similarity for metaphor detection, constructing a supervised operation and learning individual weights for each feature.", "We apply a similar approach here and modify it to predict a relation score: d = m 1 m 2 (7) h = tanh(W h d + b h ) (8) y = S · σ(a(W y h + b y )) (9) where W h , b h , a, W y and b y are trainable parameters.", "The 
annotated labels of lexical relations are generally in a fixed range, therefore we base the output function on logistic regression, which also restricts the range of the predicted scores.", "b y allows for the function to be shifted as necessary and a controls the slope of the sigmoid.", "S is the value of the maximum score in the dataset, scaling the resulting value to the correct range.", "The output y represents the confidence that the two input words are in a lexical entailment relation.", "We optimise the model by minimising the mean squared distance between the predicted score y and the gold-standard scoreŷ: L = i (y i −ŷ i ) 2 (10) Sparse Distributional Features (SDF).", "Word embeddings are well-suited for capturing distributional similarity, but they have trouble encoding features such as word frequency, or the number of unique contexts the word has appeared in.", "This information becomes important when deciding whether one word entails another, as the system needs to determine when a concept is more general and subsumes the other.", "We construct classical sparse distributional word vectors and use them to extract 5 unique features for every word pair, to complement the features extracted from neural embeddings: • Regular cosine similarity between the sparse distributional vectors of both words.", "• The sparse weighted cosine measure, described by Rei and Briscoe (2014) , comparing the weighted ranks of different distributional contexts.", "The measure is directional and assigns more importance to the features of the broader term.", "We include this weighted cosine in both directions.", "• The proportion of shared unique contexts, compared to the number of contexts for one word.", "This measure is able to capture whether one of the words appears in a subset of the contexts, compared to the other word.", "This feature is also directional and is therefore included in both directions.", "We build the sparse distributional word vectors from two versions of the British National Corpus (Leech, 1992) .", "The first counts contexts simply based on a window of size 3.", "The second uses a parsed version of the BNC (Andersen et al., 2008) and extracts contexts based on dependency relations.", "In both cases, the features are weighted using pointwise mutual information.", "Each of the five features is calculated separately for the two vector spaces, resulting in 10 corpus-based features.", "We integrate them into the network by conditioning the hidden layer h on this vector: h = tanh(W h d + W x x + b h ) (11) where x is the feature vector of length 10 and W x is the corresponding weight matrix.", "Additional Supervision (AS).", "Methods such as retrofitting (Faruqui et al., 2015) , ATTRACT-REPEL (Mrkšić et al., 2017) and Poincaré embeddings (Nickel and Kiela, 2017 ) make use of handannotated lexical relations for optimising word representations such that they capture the desired properties (so-called embedding specialisation).", "We also experiment with incorporating these resources, but instead of adjusting the individual word embeddings, we use them to optimise the shared network weights.", "This teaches the model to find useful regularities in general-purpose word embeddings, which can then be equally applied to all words in the embedding vocabulary.", "For hyponym detection, we extract examples from WordNet (Miller, 1995) and the Paraphrase Database (PPDB 2.0) (Pavlick et al., 2015) .", "We use WordNet synonyms and hyponyms as positive examples, along with antonyms and hypernyms as 
negative examples.", "In order to prevent the network from biasing towards specific words that have numerous annotated relations, we limit them to a maximum of 10 examples per word.", "From the PPDB we extract the Equivalence relations as positive examples and the Exclusion relations as negative word pairs.", "The final dataset contains 102,586 positive pairs and 42,958 negative pairs.", "However, only binary labels are attached to all word pairs, whereas the task requires predicting a graded score.", "Initial experiments with optimising the network to predict the minimal and maximal possible score for these cases did not lead to improved performance.", "Therefore, we instead make use of a hinge loss function that optimises the network to only push these examples to the correct side of the decision boundary: L = i max((y −ŷ) 2 − ( S 2 − R) 2 , 0) (12) where S is the maximum score in the range and and R is a margin parameter.", "By minimising Equation 12, the model is only updated based on examples that are not yet on the correct side of the boundary, including a margin.", "This prevents us from penalising the model for predicting a score with slight variations, as the extracted examples are not annotated with sufficient granularity.", "When optimising the model, we first perform one pretraining pass over these additional word pairs before proceeding with the regular training process.", "Evaluation SDSN Training Setup.", "As input to the SDSN network we use 300-dimensional dependency-based word embeddings by Levy and Goldberg (2014) .", "Layers m 1 and m 2 also have size 300 and layer h has size 100.", "For regularisation, we apply dropout to the embeddings with p = 0.5.", "The margin R is set to 1 for the supervised pre-training stage.", "The model is optimised using AdaDelta (Zeiler, 2012) with learning rate 1.0.", "In order to control for random noise, we run each experiment with 10 different random seeds and average the results.", "Our code and detailed configuration files will be made available online.", "1 Evaluation Data.", "We evaluate graded lexical entailment on the HyperLex dataset which contains 2,616 word pairs in total scored for the asymmetric graded lexical entailment relation.", "Following a standard practice, we report Spearman's ρ correlation of the model output to the given human-annotated scores.", "We conduct experiments on two standard data splits for supervised learning: random split and lexical split.", "In the random split the data is randomly divided into training, validation, and test subsets containing 1831, 130, and 655 word pairs, respectively.", "In the lexical split, proposed by Levy et al.", "(2015) , there is no lexical overlap between training and test subsets.", "This prevents the effect of lexical memorisation, as supervised models tend to learn an independent property of a single concept in the pair instead of learning a relation between the two concepts.", "In this setup training, validation, and test sets contain 1133, 85, and 269 word pairs, respectively.", "2 Since plenty of related research on lexical entailment is still focused on the simpler binary detection of asymmetric relations, we also run experiments on the large binary detection HypeNet dataset (Shwartz et al., 2016) , where the SDSN output is converted to binary decisions.", "We again report scores for both random and lexical split.", "Results and Analysis.", "The results on two Hyper-Lex splits are presented in Table 1 , along with the best configurations reported by .", "We refer the 
interested reader to the original Hy-perLex paper for a detailed description of the best performing baseline models.", "The Supervised Directional Similarity Network (SDSN) achieves substantially better scores than all other tested systems, despite relying on a much simpler supervision signal.", "The previous top approaches, including the Paragram+CF embeddings, make use of numerous annotations provided by WordNet or similarly rich lexical resources, while for SDSN and SDSN+SDF only use the designated relation-specific training set and corpus statistics.", "By also including these extra training instances (SDSN+SDF+AS), we can gain additional perfor- mance and push the correlation to 0.692 on the random split and 0.544 on the lexical split of Hy-perLex, an improvement of approximately 25% to the standard supervised training regime.", "In Table 3 we provide some example output from the final SDSN+SDF+AS model.", "It is able to successfully assign a high score to (captain, officer) and also identify with high confidence that wing is not a type of airplane, even though they are semantically related.", "As an example of incorrect output, the model fails to assign a high score to (prince, royalty), possibly due to the usage patterns of these words being different in context.", "In contrast, it assigns an unexpectedly high score to (kid, parent), likely due to the high distributional similarity of these words.", "Glavaš and Ponzetto (2017) proposed a related dual tensor model for the binary detection of asymmetric relations (Dual-T).", "In order to compare our system to theirs, we train our model on HypeNet and convert the output to binary decisions.", "We also compare SDSN to the best reported models of Shwartz et al.", "(2016) and Roller and Erk (2016) , which combine distributional and pattern-based information for hypernymy detection (HypeNethybrid and H-feature, respectively).", "3 We do not include additional WordNet and PPDB examples in these experiments, as the HypeNet data already subsumes most of them.", "As can be seen in Table 2 , our SDSN+SDF model achieves the best results also on the HypeNet dataset, outperforming previous models on both data splits.", "Conclusion We introduce a novel neural architecture for mapping and specialising a vector space based on limited supervision.", "While prior work has focused only on optimising individual word embeddings available in external resources, our model uses 3 For more detail on the baseline models, we refer the reader to the original papers.", "Table 3 : Example word pairs from the HyperLex development set.", "S is the human-annotated score in the HyperLex dataset.", "P is the predicted score using the SDSN+SDF+AS model.", "general-purpose embeddings and optimises a separate neural component to adapt these to the specific task, generalising to unseen data.", "The system achieves new state-of-the-art results on the task of scoring graded lexical entailment.", "Future work could apply the model to other lexical relations or extend it to cover multiple relations simultaneously." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "The Task of Grading Lexical Entailment", "System Architecture", "Evaluation", "Conclusion" ] }
GEM-SciDuet-train-97#paper-1252#slide-8
HypeNet Hyponym Detection
eo x se @ RS
eo x se @ RS
[]
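For the binary HypeNet experiments, the graded SDSN output has to be converted into yes/no decisions. The paper does not spell out the exact conversion, so the snippet below is only one plausible approach: tune a score threshold on the development split to maximise F1 and apply it to the test split. The function and variable names are placeholders.

```python
def tune_threshold(dev_scores, dev_labels):
    """Pick the score threshold that maximises F1 on the development pairs."""
    best_t, best_f1 = 0.0, -1.0
    for t in sorted(set(dev_scores)):
        preds = [s >= t for s in dev_scores]
        tp = sum(p and l for p, l in zip(preds, dev_labels))
        fp = sum(p and not l for p, l in zip(preds, dev_labels))
        fn = sum(not p and l for p, l in zip(preds, dev_labels))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

# threshold = tune_threshold(dev_scores, dev_labels)
# test_predictions = [s >= threshold for s in test_scores]
```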
GEM-SciDuet-train-97#paper-1252#slide-9
1252
Scoring Lexical Entailment with a Supervised Directional Similarity Network
We present the Supervised Directional Similarity Network (SDSN), a novel neural architecture for learning task-specific transformation functions on top of general-purpose word embeddings. Relying on only a limited amount of supervision from task-specific scores on a subset of the vocabulary, our architecture is able to generalise and transform a general-purpose distributional vector space to model the relation of lexical entailment. Experiments show excellent performance on scoring graded lexical entailment, raising the state-of-the-art on the HyperLex dataset by approximately 25%.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118 ], "paper_content_text": [ "Introduction Standard word embedding models (Mikolov et al., 2013; Pennington et al., 2014; Bojanowski et al., 2017) are based on the distributional hypothesis by Harris (1954) .", "However, purely distributional models coalesce various lexico-semantic relations (e.g., synonymy, antonymy, hypernymy) into a joint distributed representation.", "To address this, previous work has focused on introducing supervision into individual word embeddings, allowing them to better capture the desired lexical properties.", "For example, Faruqui et al.", "(2015) and Wieting et al.", "(2015) proposed methods for using annotated lexical relations to condition the vector space and bring synonymous words closer together.", "Mrkšić et al.", "(2016) and Mrkšić et al.", "(2017) improved the optimisation function and introduced an additional constraint for pushing antonym pairs further apart.", "While these methods integrate hand-crafted features from external lexical resources with distributional information, they improve only the embeddings of words that have annotated lexical relations in the training resource.", "In this work, we propose a novel approach to leveraging external knowledge with generalpurpose unsupervised embeddings, focusing on the directional graded lexical entailment task , whereas previous work has mostly investigated simpler non-directional semantic similarity tasks.", "Instead of optimising individual word embeddings, our model uses general-purpose embeddings and optimises a separate neural component to adapt these to the specific task.", "In particular, our neural Supervised Directional Similarity Network (SDSN) dynamically produces task-specific embeddings optimised for scoring the asymmetric lexical entailment relation between any two words, regardless of their presence in the training resource.", "Our results with task-specific embeddings indicate large improvements on the HyperLex dataset, a standard graded lexical entailment benchmark.", "The model also yields improvements on a simpler nongraded entailment detection task.", "The Task of Grading Lexical Entailment In graded lexical entailment, the goal is to make fine-grained assertions regarding the directional hierarchical semantic relationships between concepts .", "The task is grounded in theories of concept (proto)typicality and category vagueness from cognitive science (Rosch, 1975; Kamp and Partee, 1995) , and aims at answering the following question: \"To what degree is X a type of Y ?\".", "It quantifies the degree of lexical entailment instead of providing only a binary yes/no decision on the relationship between the concepts X and Y , as done in hypernymy detection tasks (Kotlerman et al., 2010; Weeds et al., 2014; Santus et al., 2014; Kiela et al., 2015; Shwartz et al., 2017) .", "Graded lexical entailment provides finer-grained judgements on a continuous scale.", "For instance, the word pair (girl → person) has been rated highly with 9.85/10 by the HyperLex annotators.", "The pair (guest → 
person) has received a slightly lower score of 7.22, as a prototypical guest is often a person but there can be exceptions.", "In contrast, the score for the reversed pair (person → guest) is only judged at 2.88.", "As demonstrated by and Nickel and Kiela (2017) , standard general-purpose representation models trained in an unsupervised way purely on distributional information are unfit for this task and unable to surpass the performance of simple frequency baselines (see also Table 1 ).", "Therefore, in what follows, we describe a novel supervised framework for constructing task-specific word embeddings, optimised for the graded entailment task at hand.", "System Architecture The network architecture can be seen in Figure 1 .", "The system receives a pair of words as input and predicts a score that represents the strength of the given lexical relation.", "In the graded entailment task, we would like the model to return a high score for (biology → science), as biology is a type of science, but a low score for (point → pencil).", "We start by mapping both input words to corresponding word embeddings w 1 and w 2 .", "The embeddings come from a standard distributional vector space, pre-trained on a large unannotated corpus, and are not fine-tuned during training.", "An element-wise gating operation is then applied to each word, conditioned on the other word: g 1 = σ(W g 1 w 1 + b g 1 ) (1) g 2 = σ(W g 2 w 2 + b g 2 ) (2) w 1 = w 1 g 2 (3) w 2 = w 2 g 1 (4) where W g 1 and W g 2 are weight matrices, b g 1 and b g 2 are bias vectors, σ() is the logistic function and indicates element-wise multiplication.", "This operation allows the network to first observe the candidate hypernym w 2 and then decide which features are important when analysing the hyponym w 1 .", "For example, when deciding whether seal is a type of animal, the model is able to first see the word animal and then apply a mask that blocks out features of the word seal that are not related to nature.", "During development we found it best to apply this gating in both directions, therefore we condition each word based on the other.", "Each of the word representations is then passed through a non-linear layer with tanh activation, mapping the words to a new space that is more suitable for the given task: m 1 = tanh(W m 1 w 1 + b m 1 ) (5) m 2 = tanh(W m 2 w 2 + b m 2 ) (6) where W m 1 , W m 2 , b m 1 and b m 2 are trainable parameters.", "The input embeddings are trained to predict surrounding words on a large unannotated corpus using the skip-gram objective (Mikolov et al., 2013) , making the resulting vector space reflect (a broad relation of) semantic relatedness but unsuitable for lexical entailment .", "The mapping stage allows the network to learn a transformation function from the general skip-gram embeddings to a task-specific space for lexical entailment.", "In addition, the two weight matrices enable asymmetric reasoning, allowing the network to learn separate mappings for hyponyms and hypernyms.", "We then use a supervised composition function for combining the two representations and returning a confidence score as output.", "Rei et al.", "(2017) described a generalised version of cosine similarity for metaphor detection, constructing a supervised operation and learning individual weights for each feature.", "We apply a similar approach here and modify it to predict a relation score: d = m 1 m 2 (7) h = tanh(W h d + b h ) (8) y = S · σ(a(W y h + b y )) (9) where W h , b h , a, W y and b y are trainable parameters.", "The 
annotated labels of lexical relations are generally in a fixed range, therefore we base the output function on logistic regression, which also restricts the range of the predicted scores.", "b y allows for the function to be shifted as necessary and a controls the slope of the sigmoid.", "S is the value of the maximum score in the dataset, scaling the resulting value to the correct range.", "The output y represents the confidence that the two input words are in a lexical entailment relation.", "We optimise the model by minimising the mean squared distance between the predicted score y and the gold-standard scoreŷ: L = i (y i −ŷ i ) 2 (10) Sparse Distributional Features (SDF).", "Word embeddings are well-suited for capturing distributional similarity, but they have trouble encoding features such as word frequency, or the number of unique contexts the word has appeared in.", "This information becomes important when deciding whether one word entails another, as the system needs to determine when a concept is more general and subsumes the other.", "We construct classical sparse distributional word vectors and use them to extract 5 unique features for every word pair, to complement the features extracted from neural embeddings: • Regular cosine similarity between the sparse distributional vectors of both words.", "• The sparse weighted cosine measure, described by Rei and Briscoe (2014) , comparing the weighted ranks of different distributional contexts.", "The measure is directional and assigns more importance to the features of the broader term.", "We include this weighted cosine in both directions.", "• The proportion of shared unique contexts, compared to the number of contexts for one word.", "This measure is able to capture whether one of the words appears in a subset of the contexts, compared to the other word.", "This feature is also directional and is therefore included in both directions.", "We build the sparse distributional word vectors from two versions of the British National Corpus (Leech, 1992) .", "The first counts contexts simply based on a window of size 3.", "The second uses a parsed version of the BNC (Andersen et al., 2008) and extracts contexts based on dependency relations.", "In both cases, the features are weighted using pointwise mutual information.", "Each of the five features is calculated separately for the two vector spaces, resulting in 10 corpus-based features.", "We integrate them into the network by conditioning the hidden layer h on this vector: h = tanh(W h d + W x x + b h ) (11) where x is the feature vector of length 10 and W x is the corresponding weight matrix.", "Additional Supervision (AS).", "Methods such as retrofitting (Faruqui et al., 2015) , ATTRACT-REPEL (Mrkšić et al., 2017) and Poincaré embeddings (Nickel and Kiela, 2017 ) make use of handannotated lexical relations for optimising word representations such that they capture the desired properties (so-called embedding specialisation).", "We also experiment with incorporating these resources, but instead of adjusting the individual word embeddings, we use them to optimise the shared network weights.", "This teaches the model to find useful regularities in general-purpose word embeddings, which can then be equally applied to all words in the embedding vocabulary.", "For hyponym detection, we extract examples from WordNet (Miller, 1995) and the Paraphrase Database (PPDB 2.0) (Pavlick et al., 2015) .", "We use WordNet synonyms and hyponyms as positive examples, along with antonyms and hypernyms as 
negative examples.", "In order to prevent the network from biasing towards specific words that have numerous annotated relations, we limit them to a maximum of 10 examples per word.", "From the PPDB we extract the Equivalence relations as positive examples and the Exclusion relations as negative word pairs.", "The final dataset contains 102,586 positive pairs and 42,958 negative pairs.", "However, only binary labels are attached to all word pairs, whereas the task requires predicting a graded score.", "Initial experiments with optimising the network to predict the minimal and maximal possible score for these cases did not lead to improved performance.", "Therefore, we instead make use of a hinge loss function that optimises the network to only push these examples to the correct side of the decision boundary: L = i max((y −ŷ) 2 − ( S 2 − R) 2 , 0) (12) where S is the maximum score in the range and and R is a margin parameter.", "By minimising Equation 12, the model is only updated based on examples that are not yet on the correct side of the boundary, including a margin.", "This prevents us from penalising the model for predicting a score with slight variations, as the extracted examples are not annotated with sufficient granularity.", "When optimising the model, we first perform one pretraining pass over these additional word pairs before proceeding with the regular training process.", "Evaluation SDSN Training Setup.", "As input to the SDSN network we use 300-dimensional dependency-based word embeddings by Levy and Goldberg (2014) .", "Layers m 1 and m 2 also have size 300 and layer h has size 100.", "For regularisation, we apply dropout to the embeddings with p = 0.5.", "The margin R is set to 1 for the supervised pre-training stage.", "The model is optimised using AdaDelta (Zeiler, 2012) with learning rate 1.0.", "In order to control for random noise, we run each experiment with 10 different random seeds and average the results.", "Our code and detailed configuration files will be made available online.", "1 Evaluation Data.", "We evaluate graded lexical entailment on the HyperLex dataset which contains 2,616 word pairs in total scored for the asymmetric graded lexical entailment relation.", "Following a standard practice, we report Spearman's ρ correlation of the model output to the given human-annotated scores.", "We conduct experiments on two standard data splits for supervised learning: random split and lexical split.", "In the random split the data is randomly divided into training, validation, and test subsets containing 1831, 130, and 655 word pairs, respectively.", "In the lexical split, proposed by Levy et al.", "(2015) , there is no lexical overlap between training and test subsets.", "This prevents the effect of lexical memorisation, as supervised models tend to learn an independent property of a single concept in the pair instead of learning a relation between the two concepts.", "In this setup training, validation, and test sets contain 1133, 85, and 269 word pairs, respectively.", "2 Since plenty of related research on lexical entailment is still focused on the simpler binary detection of asymmetric relations, we also run experiments on the large binary detection HypeNet dataset (Shwartz et al., 2016) , where the SDSN output is converted to binary decisions.", "We again report scores for both random and lexical split.", "Results and Analysis.", "The results on two Hyper-Lex splits are presented in Table 1 , along with the best configurations reported by .", "We refer the 
interested reader to the original Hy-perLex paper for a detailed description of the best performing baseline models.", "The Supervised Directional Similarity Network (SDSN) achieves substantially better scores than all other tested systems, despite relying on a much simpler supervision signal.", "The previous top approaches, including the Paragram+CF embeddings, make use of numerous annotations provided by WordNet or similarly rich lexical resources, while for SDSN and SDSN+SDF only use the designated relation-specific training set and corpus statistics.", "By also including these extra training instances (SDSN+SDF+AS), we can gain additional perfor- mance and push the correlation to 0.692 on the random split and 0.544 on the lexical split of Hy-perLex, an improvement of approximately 25% to the standard supervised training regime.", "In Table 3 we provide some example output from the final SDSN+SDF+AS model.", "It is able to successfully assign a high score to (captain, officer) and also identify with high confidence that wing is not a type of airplane, even though they are semantically related.", "As an example of incorrect output, the model fails to assign a high score to (prince, royalty), possibly due to the usage patterns of these words being different in context.", "In contrast, it assigns an unexpectedly high score to (kid, parent), likely due to the high distributional similarity of these words.", "Glavaš and Ponzetto (2017) proposed a related dual tensor model for the binary detection of asymmetric relations (Dual-T).", "In order to compare our system to theirs, we train our model on HypeNet and convert the output to binary decisions.", "We also compare SDSN to the best reported models of Shwartz et al.", "(2016) and Roller and Erk (2016) , which combine distributional and pattern-based information for hypernymy detection (HypeNethybrid and H-feature, respectively).", "3 We do not include additional WordNet and PPDB examples in these experiments, as the HypeNet data already subsumes most of them.", "As can be seen in Table 2 , our SDSN+SDF model achieves the best results also on the HypeNet dataset, outperforming previous models on both data splits.", "Conclusion We introduce a novel neural architecture for mapping and specialising a vector space based on limited supervision.", "While prior work has focused only on optimising individual word embeddings available in external resources, our model uses 3 For more detail on the baseline models, we refer the reader to the original papers.", "Table 3 : Example word pairs from the HyperLex development set.", "S is the human-annotated score in the HyperLex dataset.", "P is the predicted score using the SDSN+SDF+AS model.", "general-purpose embeddings and optimises a separate neural component to adapt these to the specific task, generalising to unseen data.", "The system achieves new state-of-the-art results on the task of scoring graded lexical entailment.", "Future work could apply the model to other lexical relations or extend it to cover multiple relations simultaneously." ] }
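The record above says the graded SDSN output is converted to binary decisions for the HypeNet evaluation but does not spell out how. One plausible reading is a development-set threshold; the sketch below is our assumption, not the paper's procedure.

```python
import numpy as np

def best_threshold(dev_scores, dev_labels):
    """Pick the cut-off that maximises accuracy on the development split."""
    candidates = np.unique(dev_scores)
    accuracies = [np.mean((dev_scores >= t).astype(int) == dev_labels) for t in candidates]
    return float(candidates[int(np.argmax(accuracies))])

def binarise(scores, threshold):
    return (np.asarray(scores) >= threshold).astype(int)

dev_scores = np.array([0.9, 0.8, 0.3, 0.1])
dev_labels = np.array([1, 1, 0, 0])
t = best_threshold(dev_scores, dev_labels)
print(t, binarise([0.7, 0.2], t))
```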
{ "paper_header_number": [ "1", "2", "3", "4", "5" ], "paper_header_content": [ "Introduction", "The Task of Grading Lexical Entailment", "System Architecture", "Evaluation", "Conclusion" ] }
GEM-SciDuet-train-97#paper-1252#slide-9
Conclusion
Can train a neural network to find specific regularities in off-the-shelf word embeddings Traditional sparse embeddings still provide complementary information Achieves state-of-the-art on graded lexical entailment
Can train a neural network to find specific regularities in off-the-shelf word embeddings Traditional sparse embeddings still provide complementary information Achieves state-of-the-art on graded lexical entailment
[]
GEM-SciDuet-train-98#paper-1253#slide-0
1253
Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology
Gender stereotypes are manifest in most of the world's languages and are consequently propagated or amplified by NLP systems. Although research has focused on mitigating gender stereotypes in English, the approaches that are commonly employed produce ungrammatical sentences in morphologically rich languages. We present a novel approach for converting between masculine-inflected and feminineinflected sentences in such languages. For Spanish and Hebrew, our approach achieves F 1 scores of 82% and 73% at the level of tags and accuracies of 90% and 87% at the level of forms. By evaluating our approach using four different languages, we show that, on average, it reduces gender stereotyping by a factor of 2.5 without any sacrifice to grammaticality. Sebastian J. Mielke and Jason Eisner. 2018. Spell once, summon anywhere: A two-level open-vocabulary language model. CoRR, abs/1804.08205. Thomas Müller, Ryan Cotterell, Alexander Fraser, and Hinrich Schütze. 2015. Joint lemmatization and morphological tagging with lemming. In Proceed-
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases.", "This is because NLP systems depend on language corpora, which are inherently \"not objective; they are creations of human design\" (Crawford, 2013) .", "One type of societal bias that has received considerable attention from the NLP community is gender stereotyping (Garg et al., 2017; Rudinger et al., 2017; Sutton et al., 2018) .", "Gender stereotypes can manifest in language in overt ways.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "Consequently, any NLP system that is trained such a corpus will likely learn to associate engineer with men, but not with women (De-Arteaga et al., 2019) .", "To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English (Bolukbasi et al., 2016; Dixon et al., 2018; Zhao et al., 2017 ).", "Yet, gender stereotypes also exist in other languages .", "We extract the properties of each word in the sentence.", "We then fix a noun and its tags and infer the manner in which the remaining tags must be updated.", "Finally, we reinflect the lemmata to their new forms.", "because they are a function of society, not of grammar.", "Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement (Corbett, 1991) .", "In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "This means that if the gender of one word changes, the others have to be updated to match.", "As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped (Zhao et al., 2018) , will yield ungrammatical sentences.", "Consider the Spanish phrase el ingeniero experto (the skilled engineer).", "Replacing ingeniero with ingeniera is insufficient-el must also be replaced with la and experto with experta.", "In this paper, we present a new approach to counterfactual data augmentation (CDA; Lu et al., 2018) for mitigating gender stereotypes associated with animate 1 nouns (i.e., nouns that 
represent people) for morphologically rich languages.", "We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns.", "We use this model as part of a four-step process, depicted in Fig.", "1 , to reinflect entire sentences following an intervention on the grammatical gender of one word.", "We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level F 1 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively.", "We also conduct an extrinsic evaluation using four languages.", "Following Lu et al.", "(2018) , we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality.", "Gender Stereotypes in Text Men and women are mentioned at different rates in text (Coates, 1987) .", "This problem is exacerbated in certain contexts.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resumé filtering system.", "Gender stereotypes of this sort have been observed in word embeddings (Bolukbasi et al., 2016; Sutton et al., 2018) , contextual word embeddings (Zhao et al., 2019) , and co-reference resolution systems (Rudinger et al., 2018; Zhao et al., 2018) inter alia.", "A quick fix: swapping gendered words.", "One approach to mitigating such gender stereotypes is counterfactual data augmentation (CDA; Lu et al., 2018) .", "In English, this involves augmenting a corpus with additional sentences in which gendered words, such as he and she, have been swapped to yield a balanced representation.", "Indeed, Zhao et al.", "(2018) showed that this simple heuristic significantly reduces gender stereotyping in neural co-reference resolution systems, without harming system performance.", "Unfortunately, this approach is only applicable to English and other languages with little morphological inflection.", "When applied to morphologically rich languages that exhibit gender agreement, it yields ungrammatical sentences.", "The problem: inflected languages.", "Many languages, including Spanish and Hebrew, have gender inflections for nouns, verbs, and adjectivesi.e., the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "2 This means that if the gender of one word changes, the others have to be updated to preserve morpho-syntactic agreement (Corbett, 2012) .", "Consider the following example from Spanish, where we wish to transform Sentence (1) to Sentence (2).", "(Parts of words that mark gender are depicted in bold.)", "This task is not as simple as replacing el with la-ingeniero and experto must also be reinflected.", "Moreover, the changes required for one language are not the same as those required for another (e.g., verbs are marked with gender in Hebrew, but not in Spanish).", "( Our approach.", "Our goal is to transform sentences like Sentence (1) to Sentence (2) and vice versa.", "To the best of our knowledge, this task has not been studied previously.", "Indeed, there is no existing annotated corpus of paired sentences that could be used to train a supervised model.", "As a result, we 
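The naive counterfactual data augmentation heuristic summarised in this record is easy to state in code. The sketch below is a toy English version: the swap list is a small illustrative subset (and deliberately ignores the his/her and him/her ambiguity), not the full intervention list used in prior work.

```python
SWAP = {
    "he": "she", "she": "he",
    "him": "her", "his": "her",         # simplification: "her" is ambiguous (him/his)
    "her": "his",
    "himself": "herself", "herself": "himself",
}

def swap_gendered_words(sentence):
    """Return a copy of the sentence with gendered English words swapped."""
    return " ".join(SWAP.get(tok, tok) for tok in sentence.split())

def counterfactual_augment(corpus):
    """Original sentences plus their gender-swapped counterparts."""
    return corpus + [swap_gendered_words(s) for s in corpus]

print(swap_gendered_words("he is an engineer"))      # -> she is an engineer
# Applying the same token-level swap to Spanish would produce ungrammatical
# output such as *la ingeniero experto, which is the problem the paper addresses.
```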
take an unsupervised 3 approach using dependency trees, lemmata, part-of-speech (POS) tags, and morpho-syntactic tags from Universal Dependencies corpora (UD; Nivre et al., 2018) .", "Specifically, we propose the following four-step process: 1.", "Analyze the sentence (including parsing, morphological analysis, and lemmatization).", "Figure 2 : Dependency tree for the sentence El ingeniero alemán es muy experto.", "2.", "Intervene on a gendered word.", "3.", "Infer the new morpho-syntactic tags.", "Reinflect the lemmata to their new forms.", "This process is depicted in Fig.", "1 .", "The primary technical contribution is a novel Markov random field for performing step 3, described in the next section.", "A Markov Random Field for Morpho-Syntactic Agreement In this section, we present a Markov random field (MRF; Koller and Friedman, 2009 ) for morphosyntactic agreement.", "This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags.", "Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement.", "A dependency tree for a sentence (see Fig.", "2 for an example) is a set of ordered triples (i, j, ), where i and j are positions in the sentence (or a distinguished root symbol) and ∈ L is the label of the edge i → j in the tree; each position occurs exactly once as the first element in a triple.", "Each dependency tree T is associated with a sequence of morpho-syntactic tags m = m 1 , .", ".", ".", ", m |T | and a sequence of part-ofspeech (POS) tags p = p 1 , .", ".", ".", ", p |T | .", "For example, the tags m ∈ M and p ∈ P for ingeniero are [MSC; SG] and NOUN, respectively, because ingeniero is a masculine, singular noun.", "For notational simplicity, we define M = M |T | to be the set of all length-|T | sequences of morpho-syntactic tags.", "We define the probability of m given T and p as Pr(m | T, p) ∝ (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ), (1) where the binary factor ψ(·, · | ·, ·, ·) ≥ 0 scores how well the morpho-syntactic tags m i and m j agree given the POS tags p i and p j and the label .", "For example, consider the amod (adjectival modifier) edge from experto to ingeniero in Fig.", "2 .", "The factor ψ(m i , m j | A, N, amod) returns a high score if the corresponding morpho-syntactic tags agree in gender and number (e.g., m i = [MSC; SG] and m j = [MSC; SG]) and a low score if they do not (e.g., m i = [MSC; SG] and m j = [FEM; PL]).", "The unary factor φ i (·) ≥ 0 scores a morpho-syntactic tag m i outside the context of the dependency tree.", "As we explain in §3.1, we use these unary factors to force or disallow particular tags when performing an intervention; we do not learn them.", "Eq.", "(1) is normalized by the following partition function: Z(T, p) = m ∈M (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ).", "Z(T, p) can be calculated using belief propagation; we provide the update equations that we use in App.", "A.", "Our model is depicted in Fig.", "3 .", "It is noteworthy that this model is delexicalized-i.e., it considers only the labeled dependency tree and the POS tags, not the actual words themselves.", "Parameterization We consider a linear parameterization and a neural parameterization of the binary factor ψ(·, · | ·, ·, ·).", "Linear parameterization.", "We define a matrix W (p i , p j , ) ∈ R c×c for each triple (p i , p j , ), where c is the number 
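A toy, brute-force illustration of the factorisation just defined (not the authors' code): the unnormalised score multiplies one unary factor per word and one binary factor per dependency edge, and the partition function sums these scores over all tag sequences. The factors below are invented stand-ins for the learned ones.

```python
from itertools import product

TAGS = ["MSC;SG", "FEM;SG"]                        # toy morpho-syntactic tag set

def score(m, edges, phi, psi, pos):
    """Unnormalised score of one tag sequence m for a dependency tree."""
    s = 1.0
    for i, tag in enumerate(m):
        s *= phi(i, tag)
    for (i, j, label) in edges:
        s *= psi(m[i], m[j], pos[i], pos[j], label)
    return s

def partition(n_words, edges, phi, psi, pos):
    """Z(T, p): sum of the scores of every possible tag sequence."""
    return sum(score(m, edges, phi, psi, pos) for m in product(TAGS, repeat=n_words))

# el (DET) -> ingeniero (NOUN) <- experto (ADJ), as in the running example
pos = ["DET", "NOUN", "ADJ"]
edges = [(0, 1, "det"), (2, 1, "amod")]
phi = lambda i, tag: 1.0                            # no intervention yet
psi = lambda mi, mj, pi, pj, lab: 5.0 if mi == mj else 1.0   # reward gender agreement
Z = partition(3, edges, phi, psi, pos)
print(score(("FEM;SG", "FEM;SG", "FEM;SG"), edges, phi, psi, pos) / Z)
```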
of morpho-syntactic subtags.", "For example, [MSC; SG] has two subtags MSC and SG.", "We then define ψ(·, · | ·, ·, ·) as follows: ψ(m i , m j | p i , p j , ) = exp (m i W (p i , p j , )m j ), where m i ∈ {0, 1} c is a multi-hot encoding of m i .", "Neural parameterization.", "As an alternative, we also define a neural parameterization of W (p i , p j , ) to allow parameter sharing among El ingeniero alemán es muy experto edges with different parts of speech and labels: φ1(·) φ2(·) φ3(·) φ4(·) φ5(·) φ6(·) ψ(·, · | D, N, det) ψ(·, · | A, N, amod) ψ(·, · | N, V, cop) ψ(·, · | AV, A, advmod) ψ(·, · | A, N, amod) W (p i , p j , ) = exp (U tanh(V [e(p i ); e(p j ); e( )])) where U ∈ R c×c×n 1 , V ∈ R n 1 ×3n 2 , and n 1 and n 2 define the structure of the neural parameterization and each e(·) ∈ R n 2 is an embedding function.", "Parameterization of φ i .", "We use the unary factors only to force or disallow particular tags when performing an intervention.", "Specifically, we define φ i (m) = α if m = m i 1 otherwise, (2) where α > 1 is a strength parameter that determines the extent to which m i should remain unchanged following an intervention.", "In the limit as α → ∞, all tags will remain unchanged except for the tag directly involved in the intervention.", "4 Inference Because our MRF is acyclic and tree-shaped, we can use belief propagation (Pearl, 1988) to perform exact inference.", "The algorithm is a generalization of the forward-backward algorithm for hidden Markov models (Rabiner and Juang, 1986 Parameter Estimation We use gradient-based optimization.", "We treat the negative log-likelihood − log (Pr(m | T, p)) as the loss function for tree T and compute its gradient using automatic differentiation (Rall, 1981) .", "We learn the parameters of §3.1 by optimizing the negative log-likelihood using gradient descent.", "Intervention As explained in §2, our goal is to transform sentences like Sentence (1) to Sentence (2) by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morphosyntactic agreement.", "For example, if we change the morpho-syntactic tag for ingeniero from [MSC;SG] to [FEM;SG], then we must also update the tags for el and experto, but do not need to update the tag for es, which should remain unchanged as [IN; PR; SG].", "If we intervene on the i th word in a sentence, changing its tag from m i to m i , then using our model to infer the manner in which the remaining tags must be updated means using Pr(m −i | m i , T, p) to identify high-probability tags for the remaining words.", "Crucially, we wish to change as little as possible when intervening on a gendered word.", "The unary factors φ i enable us to do exactly this.", "As described in the previous section, the strength parameter α determines the extent to which m i should remain unchanged following an intervention-the larger the value, the less likely it is that m i will be changed.", "Once the new tags have been inferred, the final step is to reinflect the lemmata to their new forms.", "This task has received considerable attention from the NLP community (Cotterell et al., 2016 (Cotterell et al., , 2017 .", "We use the inflection model of .", "This model conditions on the lemma x and morphosyntactic tag m to form a distribution over possible inflections.", "For example, given experto and [A; FEM; PL], the trained inflection model will assign a high probability to expertas.", "We provide accuracies for the trained inflection model in Tab.", 
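A small sketch of the linear parameterisation of the binary factor and of the unary intervention factor described above, using NumPy. The weight matrix is a random stand-in for the learned W(p_i, p_j, l), and the subtag inventory is truncated for illustration.

```python
import numpy as np

SUBTAGS = ["MSC", "FEM", "SG", "PL"]                # c = 4 subtags in this toy inventory

def multi_hot(tag):
    """e.g. 'FEM;SG' -> array([0., 1., 1., 0.])"""
    parts = set(tag.split(";"))
    return np.array([1.0 if s in parts else 0.0 for s in SUBTAGS])

def psi_linear(m_i, m_j, W):
    """psi(m_i, m_j | p_i, p_j, l) = exp(m_i^T W(p_i, p_j, l) m_j)"""
    return float(np.exp(multi_hot(m_i) @ W @ multi_hot(m_j)))

def phi_unary(m, m_intervened=None, alpha=np.e):    # log(alpha) = 1, the value tuned in the paper
    """Unary factor that keeps the intervened tag in place; 1 for all other tags."""
    return alpha if m == m_intervened else 1.0

rng = np.random.default_rng(0)
W_adj_noun_amod = rng.normal(size=(len(SUBTAGS), len(SUBTAGS)))   # stand-in for W(A, N, amod)
print(psi_linear("FEM;SG", "FEM;SG", W_adj_noun_amod))
print(phi_unary("FEM;SG", m_intervened="FEM;SG"),
      phi_unary("MSC;SG", m_intervened="FEM;SG"))
```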
"1.", "Experiments We used the Adam optimizer (Kingma and Ba, 2014) to train both parameterizations of our model until the change in dev-loss was less than 10 −5 bits.", "We set β = (0.9, 0.999) without tuning, and chose a learning rate of 0.005 and weight decay factor of 0.0001 after tuning.", "We tuned log α in the set {0.5, 0.75, 1, 2, 5, 10} and chose log α = 1.", "For the neural parameterization, we set n 1 = 9 and n 2 = 3 without any tuning.", "Finally, we trained the inflection model using only gendered words.", "We evaluate our approach both intrinsically and extrinsically.", "For the intrinsic evaluation, we focus on whether our approach yields the correct morphosyntactic tags and the correct reinflections.", "For the extrinsic evaluation, we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models.", "Intrinsic Evaluation To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "We therefore annotated Spanish and Hebrew sentences ourselves, with annotations made by native speakers of each language.", "Specifically, for each language, we extracted sentences containing animate nouns from that language's UD treebank.", "The average length of these extracted sentences was 37 words.", "We then manually inspected each sentence, intervening on the gender of the animate noun and reinflecting the sentence accordingly.", "We chose Spanish and Hebrew because gender agreement operates differ- Table 3 : Tag-level precision, recall, F 1 score, and accuracy and form-level accuracy for the baselines (\"-BASE\") and for our approach (\"-LIN\" is the linear parameterization, \"-NN\" is the neural parameterization).", "ently in each language.", "We provide corpus statistics for both languages in the top two rows of Tab.", "2.", "We created a hard-coded ψ(·, · | ·, ·, ·) to serve as a baseline for each language.", "For Spanish, we only activated, i.e.", "set to a number greater than zero, values that relate adjectives and determiners to nouns; for Hebrew, we only activated values that relate adjectives and verbs to nouns.", "We created two separate baselines because gender agreement operates differently in each language.", "To evaluate our approach, we held all morphosyntactic subtags fixed except for gender.", "For each annotated sentence, we intervened on the gender of the animate noun.", "We then used our model to infer which of the remaining tags should be updated (updating a tag means swapping the gender subtag because all morpho-syntactic subtags were held fixed except for gender) and reinflected the lemmata.", "Finally, we used the annotations to compute the taglevel F 1 score and the form-level accuracy, excluding the animate nouns on which we intervened.", "Results.", "We present the results in Tab.", "3.", "Recall is consistently significantly lower than precision.", "As expected, the baselines have the highest precision (though not by much).", "This is because they reflect well-known rules for each language.", "That said, they have lower recall than our approach because they fail to capture more subtle relationships.", "For both languages, our approach struggles with conjunctions.", "For example, consider the phraseél es un ingeniero y escritor (he is an engineer and a writer).", "Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora.", "This is 
because two nouns do not normally need to have the same gender when they are conjoined.", "Moreover, our MRF does not include co-reference information, so it cannot tell that, in this case, both nouns refer to the same person.", "Note Figure 4 : Gender stereotyping (left) and grammaticality (right) using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "that including co-reference information in our MRF would create cycles and inference would no longer be exact.", "Additionally, the lack of co-reference information means that, for Spanish, our approach fails to convert nouns that are noun-modifiers or indirect objects of verbs.", "Somewhat surprisingly, the neural parameterization does not outperform the linear parameterization.", "We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels; however, this parameter sharing does not seem to make a difference in practice, so the linear parameterization is sufficient.", "Extrinsic Evaluation We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping.", "Following Lu et al.", "(2018) , focus on neural language models.", "We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages.", "As our measure of gender stereotyping, we compare the log ratio of the prefix probabilities under a language model P lm for gendered, animate nouns, such as ingeniero, combined with four adjectives: good, bad, smart, and beautiful.", "The translations we use for these adjectives are given in App.", "B.", "We chose the first two adjectives because they should be used equally to describe men and women, and the latter two because we expect that they will reveal gender stereotypes.", "For example, consider log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS La ingeniera buena x) .", "If this log ratio is close to 0, then the language model is as likely to generate sentences that start with el ingeniero bueno (the good male engineer) as it is to generate sentences that start with la ingeniera bueno (the good female engineer).", "If the log ratio is negative, then the language model is more likely to generate the feminine form than the masculine form, while the opposite is true if the log ratio is positive.", "In practice, given the current gender disparity in engineering, we would expect the log ratio to be positive.", "If, however, the language model were trained on a corpus to which our CDA approach had been applied, we would then expect the log ratio to be much closer to zero.", "Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS El ingeniera bueno x) .", "We trained the linear parameterization using UD treebanks for Spanish, Hebrew, French, and Italian (see Tab.", "2).", "For each of the four languages, we parsed one million sentences from Wikipedia (May 2018 dump) using Dozat and Manning (2016) 's parser and extracted taggings and lemmata using the method of Müller et al.", "(2015) .", "We automatically extracted an animacy gazetteer from WordNet (Bond and Paik, 2012) and then manually filtered the output for correctness.", "We provide the 
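The two evaluation quantities defined in this passage reduce to differences of prefix log-probabilities. In the sketch below, lm_prefix_logprob is a hypothetical stand-in for the trained language model's prefix log-probability (summed over continuations); it is not a function from any particular library.

```python
def stereotyping_score(lm_prefix_logprob, masculine_phrase, feminine_phrase):
    """log P(masc prefix) - log P(fem prefix): ~0 means balanced, >0 prefers masculine."""
    return lm_prefix_logprob(masculine_phrase) - lm_prefix_logprob(feminine_phrase)

def grammaticality_score(lm_prefix_logprob, grammatical_phrase, ungrammatical_phrase):
    """Higher values mean the model prefers the grammatical variant."""
    return lm_prefix_logprob(grammatical_phrase) - lm_prefix_logprob(ungrammatical_phrase)

# e.g. stereotyping_score(lm, "El ingeniero bueno", "La ingeniera buena")
#      grammaticality_score(lm, "El ingeniero bueno", "El ingeniera bueno")
```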
size of the languages' animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in Tab.", "4.", "For each sentence containing a noun in our animacy gazetteer, we created a copy of the sentence, intervened on Figure 5 : Gender stereotyping for words that are stereotyped toward men or women in Spanish using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "the noun, and then used our approach to transform the sentence.", "For sentences containing more than one animate noun, we generated a separate sentence for each possible combination of genders.", "Choosing which sentences to duplicate is a difficult task.", "For example, alemán in Spanish can refer to either a German man or the German language; however, we have no way of distinguishing between these two meanings without additional annotations.", "Multilingual animacy detection (Jahan et al., 2018) might help with this challenge; co-reference information might additionally help.", "For each language, we trained the BPE-RNNLM baseline open-vocabulary language model of Mielke and Eisner (2018) using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "We then computed gender stereotyping and grammaticality as described above.", "We provide example phrases in Tab.", "5; we provide a more extensive list of phrases in App.", "C. Results Fig.", "4 demonstrates depicts gender stereotyping and grammaticality for each language using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "It is immediately apparent that our approch reduces gender stereotyping.", "On average, our approach reduces gender stereotyping by a factor of 2.5 (the lowest and highest factors are 1.2 (Ita) and 5.0 (Esp), respectively).", "We expected that naïve swapping of gendered words would also reduce gender stereotyping.", "Indeed, we see that this simple heuristic reduces gender stereotyping for some but not all of the languages.", "For Spanish, we also examine specific words that are stereotyped Table 5 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Phrases 1 and 2 are grammatical, while phrases 3 and 4 are not (dentoted by \"*\").", "Gender stereotyping is measured using phrases 1 and 2.", "Grammaticality is measured using phrases 1 and 3 and using phrases 2 and 4; these scores are then averaged.", "toward men or women.", "We define a word to be stereotyped toward one gender if 75% of its occurrences are of that gender.", "Fig.", "5 suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women.", "The grammaticality of the corpora following CDA differs between languages.", "That said, with the exception of Hebrew, our approach either sacrifices less grammaticality than naïve swapping of gendered words and sometimes increases grammaticality over the original corpus.", "Given that we know the model did not perform as accurately for Hebrew (see Tab.", "3), this finding is not surprising.", "Related Work In contrast to previous work, we focus on mitigating gender stereotypes in languages with rich morphology-specifically languages that exhibit gender 
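The 75% criterion used in this passage to pick out stereotyped words for Fig. 5 can be written as a short test; the counts in the example are invented.

```python
def stereotyped_gender(masc_count, fem_count, threshold=0.75):
    """Return 'MSC' or 'FEM' if at least `threshold` of the word's occurrences
    carry that gender, otherwise None (the word is not counted as stereotyped)."""
    total = masc_count + fem_count
    if total == 0:
        return None
    if masc_count / total >= threshold:
        return "MSC"
    if fem_count / total >= threshold:
        return "FEM"
    return None

print(stereotyped_gender(90, 10))    # -> MSC
print(stereotyped_gender(55, 45))    # -> None
```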
agreement.", "To date, the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English.", "For example, Bolukbasi et al.", "(2016) proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings; Lu et al.", "(2018) studied gender stereotypes in language models; and Rudinger et al.", "(2018) introduced a novel Winograd schema for evaluating gender stereotypes in co-reference resolution.", "The most closely related work is that of Zhao et al.", "(2018) , who used CDA to reduce gender stereotypes in co-reference resolution; however, their approach yields ungrammatical sentences in morphologically rich languages.", "Our approach is specifically intended to yield grammatical sentences when applied to such languages.", "Habash et al.", "(2019) also focused on morphologically rich languages, specifically Arabic, but in the context of gender identification in machine translation.", "Conclusion We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages.", "To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns.", "To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results.", "For example, we demonstrated that our approach reduces gender stereotyping in neural language models.", "Finally, we also identified avenues for future work, such as the inclusion of co-reference information.", "A Belief Propagation Update Equations Our belief propagation update equations are µ i→f (m) = f ∈N (i)\\{f } µ f →i (m) (3) µ f i →i (m) = φ i (m) µ i→f i (m) (4) µ f ij →i (m) = m ∈M ψ(m , m | p i , p j , ) µ j→f ij (m ) (5) µ f ij →j (m) = m ∈M ψ(m, m | p i , p j , ) µ i→f ij (m ) (6) where N (i) returns the set of neighbouring nodes of node i.", "The belief at any node is given by β(v) = f ∈N (v) µ f →v (m).", "(7) B Adjective Translations Tab.", "6 and Tab.", "7 contain the feminine and masculine translations of the four adjectives that we used.", "C Extrinsic Evaluation Example Phrases For each noun in our animacy gazetteer, we generated sixteen phrases.", "Consider the noun engineer as an example.", "We created four phrases-one for each translation of The good engineer, The bad engineer, The smart engineer, and The beautiful engineer.", "These phrases, as well as their prefix log-likelihoods are provided below in Tab.", "8.", "Table 8 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Ungrammatical phrases are denoted by \"*\".", "Phrase" ] }
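The appendix message equations quoted above can be summarised, for a rooted tree, as a single leaves-to-root sum-product pass. The sketch below (not the authors' implementation) computes the partition function this way, with factors supplied as plain functions; it pairs with the brute-force toy example given earlier.

```python
import math

def upward_messages(root, children, tags, phi, psi):
    """mu_{c -> parent(c)} for every non-root node c, computed post-order."""
    msgs = {}                                        # node -> {parent_tag: message value}
    def visit(node, parent):
        for child in children.get(node, []):
            visit(child, node)
        if parent is None:
            return
        msgs[node] = {}
        for mp in tags:                              # tag of the parent
            total = 0.0
            for mc in tags:                          # sum over the node's own tag
                val = phi(node, mc) * psi(parent, node, mp, mc)
                for child in children.get(node, []):
                    val *= msgs[child][mc]
                total += val
            msgs[node][mp] = total
    visit(root, None)
    return msgs

def partition_function(root, children, tags, phi, psi):
    msgs = upward_messages(root, children, tags, phi, psi)
    return sum(
        phi(root, mr) * math.prod(msgs[c][mr] for c in children.get(root, []))
        for mr in tags
    )

# ingeniero (node 1) is the root, with el (0) and experto (2) as dependents
children = {1: [0, 2]}
tags = ["MSC;SG", "FEM;SG"]
phi = lambda node, tag: 1.0
psi = lambda parent, child, mp, mc: 5.0 if mp == mc else 1.0
print(partition_function(1, children, tags, phi, psi))   # 72.0, matching the brute-force toy example
```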
{ "paper_header_number": [ "1", "2", "4.", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Gender Stereotypes in Text", "Reinflect the lemmata to their new forms.", "A Markov Random Field for Morpho-Syntactic Agreement", "Parameterization", "Inference", "Parameter Estimation", "Intervention", "Experiments", "Intrinsic Evaluation", "Extrinsic Evaluation", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-98#paper-1253#slide-0
Gender bias in NLP systems
Coreference resolution systems are biased: Even though the doctor reassured the nurse, she was worried. Word embeddings carry biases:
Coreference resolution systems are biased: Even though the doctor reassured the nurse, she was worried. Word embeddings carry biases:
[]
GEM-SciDuet-train-98#paper-1253#slide-1
1253
Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology
Gender stereotypes are manifest in most of the world's languages and are consequently propagated or amplified by NLP systems. Although research has focused on mitigating gender stereotypes in English, the approaches that are commonly employed produce ungrammatical sentences in morphologically rich languages. We present a novel approach for converting between masculine-inflected and feminineinflected sentences in such languages. For Spanish and Hebrew, our approach achieves F 1 scores of 82% and 73% at the level of tags and accuracies of 90% and 87% at the level of forms. By evaluating our approach using four different languages, we show that, on average, it reduces gender stereotyping by a factor of 2.5 without any sacrifice to grammaticality. Sebastian J. Mielke and Jason Eisner. 2018. Spell once, summon anywhere: A two-level open-vocabulary language model. CoRR, abs/1804.08205. Thomas Müller, Ryan Cotterell, Alexander Fraser, and Hinrich Schütze. 2015. Joint lemmatization and morphological tagging with lemming. In Proceed-
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases.", "This is because NLP systems depend on language corpora, which are inherently \"not objective; they are creations of human design\" (Crawford, 2013) .", "One type of societal bias that has received considerable attention from the NLP community is gender stereotyping (Garg et al., 2017; Rudinger et al., 2017; Sutton et al., 2018) .", "Gender stereotypes can manifest in language in overt ways.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "Consequently, any NLP system that is trained such a corpus will likely learn to associate engineer with men, but not with women (De-Arteaga et al., 2019) .", "To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English (Bolukbasi et al., 2016; Dixon et al., 2018; Zhao et al., 2017 ).", "Yet, gender stereotypes also exist in other languages .", "We extract the properties of each word in the sentence.", "We then fix a noun and its tags and infer the manner in which the remaining tags must be updated.", "Finally, we reinflect the lemmata to their new forms.", "because they are a function of society, not of grammar.", "Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement (Corbett, 1991) .", "In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "This means that if the gender of one word changes, the others have to be updated to match.", "As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped (Zhao et al., 2018) , will yield ungrammatical sentences.", "Consider the Spanish phrase el ingeniero experto (the skilled engineer).", "Replacing ingeniero with ingeniera is insufficient-el must also be replaced with la and experto with experta.", "In this paper, we present a new approach to counterfactual data augmentation (CDA; Lu et al., 2018) for mitigating gender stereotypes associated with animate 1 nouns (i.e., nouns that 
represent people) for morphologically rich languages.", "We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns.", "We use this model as part of a four-step process, depicted in Fig.", "1 , to reinflect entire sentences following an intervention on the grammatical gender of one word.", "We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level F 1 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively.", "We also conduct an extrinsic evaluation using four languages.", "Following Lu et al.", "(2018) , we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality.", "Gender Stereotypes in Text Men and women are mentioned at different rates in text (Coates, 1987) .", "This problem is exacerbated in certain contexts.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resumé filtering system.", "Gender stereotypes of this sort have been observed in word embeddings (Bolukbasi et al., 2016; Sutton et al., 2018) , contextual word embeddings (Zhao et al., 2019) , and co-reference resolution systems (Rudinger et al., 2018; Zhao et al., 2018) inter alia.", "A quick fix: swapping gendered words.", "One approach to mitigating such gender stereotypes is counterfactual data augmentation (CDA; Lu et al., 2018) .", "In English, this involves augmenting a corpus with additional sentences in which gendered words, such as he and she, have been swapped to yield a balanced representation.", "Indeed, Zhao et al.", "(2018) showed that this simple heuristic significantly reduces gender stereotyping in neural co-reference resolution systems, without harming system performance.", "Unfortunately, this approach is only applicable to English and other languages with little morphological inflection.", "When applied to morphologically rich languages that exhibit gender agreement, it yields ungrammatical sentences.", "The problem: inflected languages.", "Many languages, including Spanish and Hebrew, have gender inflections for nouns, verbs, and adjectivesi.e., the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "2 This means that if the gender of one word changes, the others have to be updated to preserve morpho-syntactic agreement (Corbett, 2012) .", "Consider the following example from Spanish, where we wish to transform Sentence (1) to Sentence (2).", "(Parts of words that mark gender are depicted in bold.)", "This task is not as simple as replacing el with la-ingeniero and experto must also be reinflected.", "Moreover, the changes required for one language are not the same as those required for another (e.g., verbs are marked with gender in Hebrew, but not in Spanish).", "( Our approach.", "Our goal is to transform sentences like Sentence (1) to Sentence (2) and vice versa.", "To the best of our knowledge, this task has not been studied previously.", "Indeed, there is no existing annotated corpus of paired sentences that could be used to train a supervised model.", "As a result, we 
take an unsupervised 3 approach using dependency trees, lemmata, part-of-speech (POS) tags, and morpho-syntactic tags from Universal Dependencies corpora (UD; Nivre et al., 2018) .", "Specifically, we propose the following four-step process: 1.", "Analyze the sentence (including parsing, morphological analysis, and lemmatization).", "Figure 2 : Dependency tree for the sentence El ingeniero alemán es muy experto.", "2.", "Intervene on a gendered word.", "3.", "Infer the new morpho-syntactic tags.", "Reinflect the lemmata to their new forms.", "This process is depicted in Fig.", "1 .", "The primary technical contribution is a novel Markov random field for performing step 3, described in the next section.", "A Markov Random Field for Morpho-Syntactic Agreement In this section, we present a Markov random field (MRF; Koller and Friedman, 2009 ) for morphosyntactic agreement.", "This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags.", "Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement.", "A dependency tree for a sentence (see Fig.", "2 for an example) is a set of ordered triples (i, j, ), where i and j are positions in the sentence (or a distinguished root symbol) and ∈ L is the label of the edge i → j in the tree; each position occurs exactly once as the first element in a triple.", "Each dependency tree T is associated with a sequence of morpho-syntactic tags m = m 1 , .", ".", ".", ", m |T | and a sequence of part-ofspeech (POS) tags p = p 1 , .", ".", ".", ", p |T | .", "For example, the tags m ∈ M and p ∈ P for ingeniero are [MSC; SG] and NOUN, respectively, because ingeniero is a masculine, singular noun.", "For notational simplicity, we define M = M |T | to be the set of all length-|T | sequences of morpho-syntactic tags.", "We define the probability of m given T and p as Pr(m | T, p) ∝ (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ), (1) where the binary factor ψ(·, · | ·, ·, ·) ≥ 0 scores how well the morpho-syntactic tags m i and m j agree given the POS tags p i and p j and the label .", "For example, consider the amod (adjectival modifier) edge from experto to ingeniero in Fig.", "2 .", "The factor ψ(m i , m j | A, N, amod) returns a high score if the corresponding morpho-syntactic tags agree in gender and number (e.g., m i = [MSC; SG] and m j = [MSC; SG]) and a low score if they do not (e.g., m i = [MSC; SG] and m j = [FEM; PL]).", "The unary factor φ i (·) ≥ 0 scores a morpho-syntactic tag m i outside the context of the dependency tree.", "As we explain in §3.1, we use these unary factors to force or disallow particular tags when performing an intervention; we do not learn them.", "Eq.", "(1) is normalized by the following partition function: Z(T, p) = m ∈M (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ).", "Z(T, p) can be calculated using belief propagation; we provide the update equations that we use in App.", "A.", "Our model is depicted in Fig.", "3 .", "It is noteworthy that this model is delexicalized-i.e., it considers only the labeled dependency tree and the POS tags, not the actual words themselves.", "Parameterization We consider a linear parameterization and a neural parameterization of the binary factor ψ(·, · | ·, ·, ·).", "Linear parameterization.", "We define a matrix W (p i , p j , ) ∈ R c×c for each triple (p i , p j , ), where c is the number 
of morpho-syntactic subtags.", "For example, [MSC; SG] has two subtags MSC and SG.", "We then define ψ(·, · | ·, ·, ·) as follows: ψ(m i , m j | p i , p j , ) = exp (m i W (p i , p j , )m j ), where m i ∈ {0, 1} c is a multi-hot encoding of m i .", "Neural parameterization.", "As an alternative, we also define a neural parameterization of W (p i , p j , ) to allow parameter sharing among El ingeniero alemán es muy experto edges with different parts of speech and labels: φ1(·) φ2(·) φ3(·) φ4(·) φ5(·) φ6(·) ψ(·, · | D, N, det) ψ(·, · | A, N, amod) ψ(·, · | N, V, cop) ψ(·, · | AV, A, advmod) ψ(·, · | A, N, amod) W (p i , p j , ) = exp (U tanh(V [e(p i ); e(p j ); e( )])) where U ∈ R c×c×n 1 , V ∈ R n 1 ×3n 2 , and n 1 and n 2 define the structure of the neural parameterization and each e(·) ∈ R n 2 is an embedding function.", "Parameterization of φ i .", "We use the unary factors only to force or disallow particular tags when performing an intervention.", "Specifically, we define φ i (m) = α if m = m i 1 otherwise, (2) where α > 1 is a strength parameter that determines the extent to which m i should remain unchanged following an intervention.", "In the limit as α → ∞, all tags will remain unchanged except for the tag directly involved in the intervention.", "4 Inference Because our MRF is acyclic and tree-shaped, we can use belief propagation (Pearl, 1988) to perform exact inference.", "The algorithm is a generalization of the forward-backward algorithm for hidden Markov models (Rabiner and Juang, 1986 Parameter Estimation We use gradient-based optimization.", "We treat the negative log-likelihood − log (Pr(m | T, p)) as the loss function for tree T and compute its gradient using automatic differentiation (Rall, 1981) .", "We learn the parameters of §3.1 by optimizing the negative log-likelihood using gradient descent.", "Intervention As explained in §2, our goal is to transform sentences like Sentence (1) to Sentence (2) by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morphosyntactic agreement.", "For example, if we change the morpho-syntactic tag for ingeniero from [MSC;SG] to [FEM;SG], then we must also update the tags for el and experto, but do not need to update the tag for es, which should remain unchanged as [IN; PR; SG].", "If we intervene on the i th word in a sentence, changing its tag from m i to m i , then using our model to infer the manner in which the remaining tags must be updated means using Pr(m −i | m i , T, p) to identify high-probability tags for the remaining words.", "Crucially, we wish to change as little as possible when intervening on a gendered word.", "The unary factors φ i enable us to do exactly this.", "As described in the previous section, the strength parameter α determines the extent to which m i should remain unchanged following an intervention-the larger the value, the less likely it is that m i will be changed.", "Once the new tags have been inferred, the final step is to reinflect the lemmata to their new forms.", "This task has received considerable attention from the NLP community (Cotterell et al., 2016 (Cotterell et al., , 2017 .", "We use the inflection model of .", "This model conditions on the lemma x and morphosyntactic tag m to form a distribution over possible inflections.", "For example, given experto and [A; FEM; PL], the trained inflection model will assign a high probability to expertas.", "We provide accuracies for the trained inflection model in Tab.", 
"1.", "Experiments We used the Adam optimizer (Kingma and Ba, 2014) to train both parameterizations of our model until the change in dev-loss was less than 10 −5 bits.", "We set β = (0.9, 0.999) without tuning, and chose a learning rate of 0.005 and weight decay factor of 0.0001 after tuning.", "We tuned log α in the set {0.5, 0.75, 1, 2, 5, 10} and chose log α = 1.", "For the neural parameterization, we set n 1 = 9 and n 2 = 3 without any tuning.", "Finally, we trained the inflection model using only gendered words.", "We evaluate our approach both intrinsically and extrinsically.", "For the intrinsic evaluation, we focus on whether our approach yields the correct morphosyntactic tags and the correct reinflections.", "For the extrinsic evaluation, we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models.", "Intrinsic Evaluation To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "We therefore annotated Spanish and Hebrew sentences ourselves, with annotations made by native speakers of each language.", "Specifically, for each language, we extracted sentences containing animate nouns from that language's UD treebank.", "The average length of these extracted sentences was 37 words.", "We then manually inspected each sentence, intervening on the gender of the animate noun and reinflecting the sentence accordingly.", "We chose Spanish and Hebrew because gender agreement operates differ- Table 3 : Tag-level precision, recall, F 1 score, and accuracy and form-level accuracy for the baselines (\"-BASE\") and for our approach (\"-LIN\" is the linear parameterization, \"-NN\" is the neural parameterization).", "ently in each language.", "We provide corpus statistics for both languages in the top two rows of Tab.", "2.", "We created a hard-coded ψ(·, · | ·, ·, ·) to serve as a baseline for each language.", "For Spanish, we only activated, i.e.", "set to a number greater than zero, values that relate adjectives and determiners to nouns; for Hebrew, we only activated values that relate adjectives and verbs to nouns.", "We created two separate baselines because gender agreement operates differently in each language.", "To evaluate our approach, we held all morphosyntactic subtags fixed except for gender.", "For each annotated sentence, we intervened on the gender of the animate noun.", "We then used our model to infer which of the remaining tags should be updated (updating a tag means swapping the gender subtag because all morpho-syntactic subtags were held fixed except for gender) and reinflected the lemmata.", "Finally, we used the annotations to compute the taglevel F 1 score and the form-level accuracy, excluding the animate nouns on which we intervened.", "Results.", "We present the results in Tab.", "3.", "Recall is consistently significantly lower than precision.", "As expected, the baselines have the highest precision (though not by much).", "This is because they reflect well-known rules for each language.", "That said, they have lower recall than our approach because they fail to capture more subtle relationships.", "For both languages, our approach struggles with conjunctions.", "For example, consider the phraseél es un ingeniero y escritor (he is an engineer and a writer).", "Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora.", "This is 
because two nouns do not normally need to have the same gender when they are conjoined.", "Moreover, our MRF does not include co-reference information, so it cannot tell that, in this case, both nouns refer to the same person.", "Note Figure 4 : Gender stereotyping (left) and grammaticality (right) using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "that including co-reference information in our MRF would create cycles and inference would no longer be exact.", "Additionally, the lack of co-reference information means that, for Spanish, our approach fails to convert nouns that are noun-modifiers or indirect objects of verbs.", "Somewhat surprisingly, the neural parameterization does not outperform the linear parameterization.", "We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels; however, this parameter sharing does not seem to make a difference in practice, so the linear parameterization is sufficient.", "Extrinsic Evaluation We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping.", "Following Lu et al.", "(2018) , focus on neural language models.", "We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages.", "As our measure of gender stereotyping, we compare the log ratio of the prefix probabilities under a language model P lm for gendered, animate nouns, such as ingeniero, combined with four adjectives: good, bad, smart, and beautiful.", "The translations we use for these adjectives are given in App.", "B.", "We chose the first two adjectives because they should be used equally to describe men and women, and the latter two because we expect that they will reveal gender stereotypes.", "For example, consider log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS La ingeniera buena x) .", "If this log ratio is close to 0, then the language model is as likely to generate sentences that start with el ingeniero bueno (the good male engineer) as it is to generate sentences that start with la ingeniera bueno (the good female engineer).", "If the log ratio is negative, then the language model is more likely to generate the feminine form than the masculine form, while the opposite is true if the log ratio is positive.", "In practice, given the current gender disparity in engineering, we would expect the log ratio to be positive.", "If, however, the language model were trained on a corpus to which our CDA approach had been applied, we would then expect the log ratio to be much closer to zero.", "Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS El ingeniera bueno x) .", "We trained the linear parameterization using UD treebanks for Spanish, Hebrew, French, and Italian (see Tab.", "2).", "For each of the four languages, we parsed one million sentences from Wikipedia (May 2018 dump) using Dozat and Manning (2016) 's parser and extracted taggings and lemmata using the method of Müller et al.", "(2015) .", "We automatically extracted an animacy gazetteer from WordNet (Bond and Paik, 2012) and then manually filtered the output for correctness.", "We provide the 
size of the languages' animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in Tab.", "4.", "For each sentence containing a noun in our animacy gazetteer, we created a copy of the sentence, intervened on Figure 5 : Gender stereotyping for words that are stereotyped toward men or women in Spanish using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "the noun, and then used our approach to transform the sentence.", "For sentences containing more than one animate noun, we generated a separate sentence for each possible combination of genders.", "Choosing which sentences to duplicate is a difficult task.", "For example, alemán in Spanish can refer to either a German man or the German language; however, we have no way of distinguishing between these two meanings without additional annotations.", "Multilingual animacy detection (Jahan et al., 2018) might help with this challenge; co-reference information might additionally help.", "For each language, we trained the BPE-RNNLM baseline open-vocabulary language model of Mielke and Eisner (2018) using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "We then computed gender stereotyping and grammaticality as described above.", "We provide example phrases in Tab.", "5; we provide a more extensive list of phrases in App.", "C. Results Fig.", "4 demonstrates depicts gender stereotyping and grammaticality for each language using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "It is immediately apparent that our approch reduces gender stereotyping.", "On average, our approach reduces gender stereotyping by a factor of 2.5 (the lowest and highest factors are 1.2 (Ita) and 5.0 (Esp), respectively).", "We expected that naïve swapping of gendered words would also reduce gender stereotyping.", "Indeed, we see that this simple heuristic reduces gender stereotyping for some but not all of the languages.", "For Spanish, we also examine specific words that are stereotyped Table 5 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Phrases 1 and 2 are grammatical, while phrases 3 and 4 are not (dentoted by \"*\").", "Gender stereotyping is measured using phrases 1 and 2.", "Grammaticality is measured using phrases 1 and 3 and using phrases 2 and 4; these scores are then averaged.", "toward men or women.", "We define a word to be stereotyped toward one gender if 75% of its occurrences are of that gender.", "Fig.", "5 suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women.", "The grammaticality of the corpora following CDA differs between languages.", "That said, with the exception of Hebrew, our approach either sacrifices less grammaticality than naïve swapping of gendered words and sometimes increases grammaticality over the original corpus.", "Given that we know the model did not perform as accurately for Hebrew (see Tab.", "3), this finding is not surprising.", "Related Work In contrast to previous work, we focus on mitigating gender stereotypes in languages with rich morphology-specifically languages that exhibit gender 
agreement.", "To date, the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English.", "For example, Bolukbasi et al.", "(2016) proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings; Lu et al.", "(2018) studied gender stereotypes in language models; and Rudinger et al.", "(2018) introduced a novel Winograd schema for evaluating gender stereotypes in co-reference resolution.", "The most closely related work is that of Zhao et al.", "(2018) , who used CDA to reduce gender stereotypes in co-reference resolution; however, their approach yields ungrammatical sentences in morphologically rich languages.", "Our approach is specifically intended to yield grammatical sentences when applied to such languages.", "Habash et al.", "(2019) also focused on morphologically rich languages, specifically Arabic, but in the context of gender identification in machine translation.", "Conclusion We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages.", "To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns.", "To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results.", "For example, we demonstrated that our approach reduces gender stereotyping in neural language models.", "Finally, we also identified avenues for future work, such as the inclusion of co-reference information.", "A Belief Propagation Update Equations Our belief propagation update equations are µ i→f (m) = f ∈N (i)\\{f } µ f →i (m) (3) µ f i →i (m) = φ i (m) µ i→f i (m) (4) µ f ij →i (m) = m ∈M ψ(m , m | p i , p j , ) µ j→f ij (m ) (5) µ f ij →j (m) = m ∈M ψ(m, m | p i , p j , ) µ i→f ij (m ) (6) where N (i) returns the set of neighbouring nodes of node i.", "The belief at any node is given by β(v) = f ∈N (v) µ f →v (m).", "(7) B Adjective Translations Tab.", "6 and Tab.", "7 contain the feminine and masculine translations of the four adjectives that we used.", "C Extrinsic Evaluation Example Phrases For each noun in our animacy gazetteer, we generated sixteen phrases.", "Consider the noun engineer as an example.", "We created four phrases-one for each translation of The good engineer, The bad engineer, The smart engineer, and The beautiful engineer.", "These phrases, as well as their prefix log-likelihoods are provided below in Tab.", "8.", "Table 8 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Ungrammatical phrases are denoted by \"*\".", "Phrase" ] }
{ "paper_header_number": [ "1", "2", "4.", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Gender Stereotypes in Text", "Reinflect the lemmata to their new forms.", "A Markov Random Field for Morpho-Syntactic Agreement", "Parameterization", "Inference", "Parameter Estimation", "Intervention", "Experiments", "Intrinsic Evaluation", "Extrinsic Evaluation", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-98#paper-1253#slide-1
This shouldn't come as a surprise: our data is biased
Google n-grams frequency counts she is a doctor
Google n-grams frequency counts she is a doctor
[]
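The extrinsic gender-stereotyping measure described in the paper content above is a log ratio of prefix probabilities under a language model. A minimal, self-contained sketch follows; the toy corpus and the add-one-smoothed bigram model are stand-ins for the Wikipedia data and the BPE-RNNLM actually used, so only the shape of the computation is meant to carry over.

```python
# Sketch of the gender-stereotyping log ratio: prefix log-likelihood of the
# masculine phrase minus that of the feminine phrase (toy bigram LM).
import math
from collections import Counter

corpus = [                                  # toy stand-in for Wikipedia text
    "BOS el ingeniero bueno trabaja mucho",
    "BOS el ingeniero bueno llega temprano",
    "BOS la ingeniera buena trabaja mucho",
]
sents = [s.split() for s in corpus]
vocab = {w for s in sents for w in s}
bigrams = Counter((a, b) for s in sents for a, b in zip(s, s[1:]))
unigrams = Counter(w for s in sents for w in s)

def prefix_logprob(prefix: str) -> float:
    """log P_lm(prefix ...) as a sum of add-one-smoothed bigram log-probs."""
    ws = prefix.split()
    return sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + len(vocab)))
               for a, b in zip(ws, ws[1:]))

masc = "BOS el ingeniero bueno"
fem = "BOS la ingeniera buena"
stereotype = prefix_logprob(masc) - prefix_logprob(fem)
print(f"gender-stereotyping log ratio: {stereotype:+.3f}")   # > 0: skewed masculine
```

Because the continuations x sum out, the ratio of the sums over P_lm(prefix x) reduces to a difference of prefix log-likelihoods; the grammaticality score is computed the same way, with the ungrammatical variant of the phrase in the denominator.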
GEM-SciDuet-train-98#paper-1253#slide-2
1253
Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology
Gender stereotypes are manifest in most of the world's languages and are consequently propagated or amplified by NLP systems. Although research has focused on mitigating gender stereotypes in English, the approaches that are commonly employed produce ungrammatical sentences in morphologically rich languages. We present a novel approach for converting between masculine-inflected and feminine-inflected sentences in such languages. For Spanish and Hebrew, our approach achieves F 1 scores of 82% and 73% at the level of tags and accuracies of 90% and 87% at the level of forms. By evaluating our approach using four different languages, we show that, on average, it reduces gender stereotyping by a factor of 2.5 without any sacrifice to grammaticality. Sebastian J. Mielke and Jason Eisner. 2018. Spell once, summon anywhere: A two-level open-vocabulary language model. CoRR, abs/1804.08205. Thomas Müller, Ryan Cotterell, Alexander Fraser, and Hinrich Schütze. 2015. Joint lemmatization and morphological tagging with lemming. In Proceed-
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases.", "This is because NLP systems depend on language corpora, which are inherently \"not objective; they are creations of human design\" (Crawford, 2013) .", "One type of societal bias that has received considerable attention from the NLP community is gender stereotyping (Garg et al., 2017; Rudinger et al., 2017; Sutton et al., 2018) .", "Gender stereotypes can manifest in language in overt ways.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "Consequently, any NLP system that is trained such a corpus will likely learn to associate engineer with men, but not with women (De-Arteaga et al., 2019) .", "To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English (Bolukbasi et al., 2016; Dixon et al., 2018; Zhao et al., 2017 ).", "Yet, gender stereotypes also exist in other languages .", "We extract the properties of each word in the sentence.", "We then fix a noun and its tags and infer the manner in which the remaining tags must be updated.", "Finally, we reinflect the lemmata to their new forms.", "because they are a function of society, not of grammar.", "Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement (Corbett, 1991) .", "In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "This means that if the gender of one word changes, the others have to be updated to match.", "As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped (Zhao et al., 2018) , will yield ungrammatical sentences.", "Consider the Spanish phrase el ingeniero experto (the skilled engineer).", "Replacing ingeniero with ingeniera is insufficient-el must also be replaced with la and experto with experta.", "In this paper, we present a new approach to counterfactual data augmentation (CDA; Lu et al., 2018) for mitigating gender stereotypes associated with animate 1 nouns (i.e., nouns that 
represent people) for morphologically rich languages.", "We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns.", "We use this model as part of a four-step process, depicted in Fig.", "1 , to reinflect entire sentences following an intervention on the grammatical gender of one word.", "We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level F 1 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively.", "We also conduct an extrinsic evaluation using four languages.", "Following Lu et al.", "(2018) , we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality.", "Gender Stereotypes in Text Men and women are mentioned at different rates in text (Coates, 1987) .", "This problem is exacerbated in certain contexts.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resumé filtering system.", "Gender stereotypes of this sort have been observed in word embeddings (Bolukbasi et al., 2016; Sutton et al., 2018) , contextual word embeddings (Zhao et al., 2019) , and co-reference resolution systems (Rudinger et al., 2018; Zhao et al., 2018) inter alia.", "A quick fix: swapping gendered words.", "One approach to mitigating such gender stereotypes is counterfactual data augmentation (CDA; Lu et al., 2018) .", "In English, this involves augmenting a corpus with additional sentences in which gendered words, such as he and she, have been swapped to yield a balanced representation.", "Indeed, Zhao et al.", "(2018) showed that this simple heuristic significantly reduces gender stereotyping in neural co-reference resolution systems, without harming system performance.", "Unfortunately, this approach is only applicable to English and other languages with little morphological inflection.", "When applied to morphologically rich languages that exhibit gender agreement, it yields ungrammatical sentences.", "The problem: inflected languages.", "Many languages, including Spanish and Hebrew, have gender inflections for nouns, verbs, and adjectivesi.e., the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "2 This means that if the gender of one word changes, the others have to be updated to preserve morpho-syntactic agreement (Corbett, 2012) .", "Consider the following example from Spanish, where we wish to transform Sentence (1) to Sentence (2).", "(Parts of words that mark gender are depicted in bold.)", "This task is not as simple as replacing el with la-ingeniero and experto must also be reinflected.", "Moreover, the changes required for one language are not the same as those required for another (e.g., verbs are marked with gender in Hebrew, but not in Spanish).", "( Our approach.", "Our goal is to transform sentences like Sentence (1) to Sentence (2) and vice versa.", "To the best of our knowledge, this task has not been studied previously.", "Indeed, there is no existing annotated corpus of paired sentences that could be used to train a supervised model.", "As a result, we 
take an unsupervised 3 approach using dependency trees, lemmata, part-of-speech (POS) tags, and morpho-syntactic tags from Universal Dependencies corpora (UD; Nivre et al., 2018) .", "Specifically, we propose the following four-step process: 1.", "Analyze the sentence (including parsing, morphological analysis, and lemmatization).", "Figure 2 : Dependency tree for the sentence El ingeniero alemán es muy experto.", "2.", "Intervene on a gendered word.", "3.", "Infer the new morpho-syntactic tags.", "Reinflect the lemmata to their new forms.", "This process is depicted in Fig.", "1 .", "The primary technical contribution is a novel Markov random field for performing step 3, described in the next section.", "A Markov Random Field for Morpho-Syntactic Agreement In this section, we present a Markov random field (MRF; Koller and Friedman, 2009 ) for morphosyntactic agreement.", "This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags.", "Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement.", "A dependency tree for a sentence (see Fig.", "2 for an example) is a set of ordered triples (i, j, ), where i and j are positions in the sentence (or a distinguished root symbol) and ∈ L is the label of the edge i → j in the tree; each position occurs exactly once as the first element in a triple.", "Each dependency tree T is associated with a sequence of morpho-syntactic tags m = m 1 , .", ".", ".", ", m |T | and a sequence of part-ofspeech (POS) tags p = p 1 , .", ".", ".", ", p |T | .", "For example, the tags m ∈ M and p ∈ P for ingeniero are [MSC; SG] and NOUN, respectively, because ingeniero is a masculine, singular noun.", "For notational simplicity, we define M = M |T | to be the set of all length-|T | sequences of morpho-syntactic tags.", "We define the probability of m given T and p as Pr(m | T, p) ∝ (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ), (1) where the binary factor ψ(·, · | ·, ·, ·) ≥ 0 scores how well the morpho-syntactic tags m i and m j agree given the POS tags p i and p j and the label .", "For example, consider the amod (adjectival modifier) edge from experto to ingeniero in Fig.", "2 .", "The factor ψ(m i , m j | A, N, amod) returns a high score if the corresponding morpho-syntactic tags agree in gender and number (e.g., m i = [MSC; SG] and m j = [MSC; SG]) and a low score if they do not (e.g., m i = [MSC; SG] and m j = [FEM; PL]).", "The unary factor φ i (·) ≥ 0 scores a morpho-syntactic tag m i outside the context of the dependency tree.", "As we explain in §3.1, we use these unary factors to force or disallow particular tags when performing an intervention; we do not learn them.", "Eq.", "(1) is normalized by the following partition function: Z(T, p) = m ∈M (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ).", "Z(T, p) can be calculated using belief propagation; we provide the update equations that we use in App.", "A.", "Our model is depicted in Fig.", "3 .", "It is noteworthy that this model is delexicalized-i.e., it considers only the labeled dependency tree and the POS tags, not the actual words themselves.", "Parameterization We consider a linear parameterization and a neural parameterization of the binary factor ψ(·, · | ·, ·, ·).", "Linear parameterization.", "We define a matrix W (p i , p j , ) ∈ R c×c for each triple (p i , p j , ), where c is the number 
of morpho-syntactic subtags.", "For example, [MSC; SG] has two subtags MSC and SG.", "We then define ψ(·, · | ·, ·, ·) as follows: ψ(m i , m j | p i , p j , ) = exp (m i W (p i , p j , )m j ), where m i ∈ {0, 1} c is a multi-hot encoding of m i .", "Neural parameterization.", "As an alternative, we also define a neural parameterization of W (p i , p j , ) to allow parameter sharing among El ingeniero alemán es muy experto edges with different parts of speech and labels: φ1(·) φ2(·) φ3(·) φ4(·) φ5(·) φ6(·) ψ(·, · | D, N, det) ψ(·, · | A, N, amod) ψ(·, · | N, V, cop) ψ(·, · | AV, A, advmod) ψ(·, · | A, N, amod) W (p i , p j , ) = exp (U tanh(V [e(p i ); e(p j ); e( )])) where U ∈ R c×c×n 1 , V ∈ R n 1 ×3n 2 , and n 1 and n 2 define the structure of the neural parameterization and each e(·) ∈ R n 2 is an embedding function.", "Parameterization of φ i .", "We use the unary factors only to force or disallow particular tags when performing an intervention.", "Specifically, we define φ i (m) = α if m = m i 1 otherwise, (2) where α > 1 is a strength parameter that determines the extent to which m i should remain unchanged following an intervention.", "In the limit as α → ∞, all tags will remain unchanged except for the tag directly involved in the intervention.", "4 Inference Because our MRF is acyclic and tree-shaped, we can use belief propagation (Pearl, 1988) to perform exact inference.", "The algorithm is a generalization of the forward-backward algorithm for hidden Markov models (Rabiner and Juang, 1986 Parameter Estimation We use gradient-based optimization.", "We treat the negative log-likelihood − log (Pr(m | T, p)) as the loss function for tree T and compute its gradient using automatic differentiation (Rall, 1981) .", "We learn the parameters of §3.1 by optimizing the negative log-likelihood using gradient descent.", "Intervention As explained in §2, our goal is to transform sentences like Sentence (1) to Sentence (2) by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morphosyntactic agreement.", "For example, if we change the morpho-syntactic tag for ingeniero from [MSC;SG] to [FEM;SG], then we must also update the tags for el and experto, but do not need to update the tag for es, which should remain unchanged as [IN; PR; SG].", "If we intervene on the i th word in a sentence, changing its tag from m i to m i , then using our model to infer the manner in which the remaining tags must be updated means using Pr(m −i | m i , T, p) to identify high-probability tags for the remaining words.", "Crucially, we wish to change as little as possible when intervening on a gendered word.", "The unary factors φ i enable us to do exactly this.", "As described in the previous section, the strength parameter α determines the extent to which m i should remain unchanged following an intervention-the larger the value, the less likely it is that m i will be changed.", "Once the new tags have been inferred, the final step is to reinflect the lemmata to their new forms.", "This task has received considerable attention from the NLP community (Cotterell et al., 2016 (Cotterell et al., , 2017 .", "We use the inflection model of .", "This model conditions on the lemma x and morphosyntactic tag m to form a distribution over possible inflections.", "For example, given experto and [A; FEM; PL], the trained inflection model will assign a high probability to expertas.", "We provide accuracies for the trained inflection model in Tab.", 
"1.", "Experiments We used the Adam optimizer (Kingma and Ba, 2014) to train both parameterizations of our model until the change in dev-loss was less than 10 −5 bits.", "We set β = (0.9, 0.999) without tuning, and chose a learning rate of 0.005 and weight decay factor of 0.0001 after tuning.", "We tuned log α in the set {0.5, 0.75, 1, 2, 5, 10} and chose log α = 1.", "For the neural parameterization, we set n 1 = 9 and n 2 = 3 without any tuning.", "Finally, we trained the inflection model using only gendered words.", "We evaluate our approach both intrinsically and extrinsically.", "For the intrinsic evaluation, we focus on whether our approach yields the correct morphosyntactic tags and the correct reinflections.", "For the extrinsic evaluation, we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models.", "Intrinsic Evaluation To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "We therefore annotated Spanish and Hebrew sentences ourselves, with annotations made by native speakers of each language.", "Specifically, for each language, we extracted sentences containing animate nouns from that language's UD treebank.", "The average length of these extracted sentences was 37 words.", "We then manually inspected each sentence, intervening on the gender of the animate noun and reinflecting the sentence accordingly.", "We chose Spanish and Hebrew because gender agreement operates differ- Table 3 : Tag-level precision, recall, F 1 score, and accuracy and form-level accuracy for the baselines (\"-BASE\") and for our approach (\"-LIN\" is the linear parameterization, \"-NN\" is the neural parameterization).", "ently in each language.", "We provide corpus statistics for both languages in the top two rows of Tab.", "2.", "We created a hard-coded ψ(·, · | ·, ·, ·) to serve as a baseline for each language.", "For Spanish, we only activated, i.e.", "set to a number greater than zero, values that relate adjectives and determiners to nouns; for Hebrew, we only activated values that relate adjectives and verbs to nouns.", "We created two separate baselines because gender agreement operates differently in each language.", "To evaluate our approach, we held all morphosyntactic subtags fixed except for gender.", "For each annotated sentence, we intervened on the gender of the animate noun.", "We then used our model to infer which of the remaining tags should be updated (updating a tag means swapping the gender subtag because all morpho-syntactic subtags were held fixed except for gender) and reinflected the lemmata.", "Finally, we used the annotations to compute the taglevel F 1 score and the form-level accuracy, excluding the animate nouns on which we intervened.", "Results.", "We present the results in Tab.", "3.", "Recall is consistently significantly lower than precision.", "As expected, the baselines have the highest precision (though not by much).", "This is because they reflect well-known rules for each language.", "That said, they have lower recall than our approach because they fail to capture more subtle relationships.", "For both languages, our approach struggles with conjunctions.", "For example, consider the phraseél es un ingeniero y escritor (he is an engineer and a writer).", "Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora.", "This is 
because two nouns do not normally need to have the same gender when they are conjoined.", "Moreover, our MRF does not include co-reference information, so it cannot tell that, in this case, both nouns refer to the same person.", "Note Figure 4 : Gender stereotyping (left) and grammaticality (right) using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "that including co-reference information in our MRF would create cycles and inference would no longer be exact.", "Additionally, the lack of co-reference information means that, for Spanish, our approach fails to convert nouns that are noun-modifiers or indirect objects of verbs.", "Somewhat surprisingly, the neural parameterization does not outperform the linear parameterization.", "We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels; however, this parameter sharing does not seem to make a difference in practice, so the linear parameterization is sufficient.", "Extrinsic Evaluation We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping.", "Following Lu et al.", "(2018) , focus on neural language models.", "We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages.", "As our measure of gender stereotyping, we compare the log ratio of the prefix probabilities under a language model P lm for gendered, animate nouns, such as ingeniero, combined with four adjectives: good, bad, smart, and beautiful.", "The translations we use for these adjectives are given in App.", "B.", "We chose the first two adjectives because they should be used equally to describe men and women, and the latter two because we expect that they will reveal gender stereotypes.", "For example, consider log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS La ingeniera buena x) .", "If this log ratio is close to 0, then the language model is as likely to generate sentences that start with el ingeniero bueno (the good male engineer) as it is to generate sentences that start with la ingeniera bueno (the good female engineer).", "If the log ratio is negative, then the language model is more likely to generate the feminine form than the masculine form, while the opposite is true if the log ratio is positive.", "In practice, given the current gender disparity in engineering, we would expect the log ratio to be positive.", "If, however, the language model were trained on a corpus to which our CDA approach had been applied, we would then expect the log ratio to be much closer to zero.", "Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS El ingeniera bueno x) .", "We trained the linear parameterization using UD treebanks for Spanish, Hebrew, French, and Italian (see Tab.", "2).", "For each of the four languages, we parsed one million sentences from Wikipedia (May 2018 dump) using Dozat and Manning (2016) 's parser and extracted taggings and lemmata using the method of Müller et al.", "(2015) .", "We automatically extracted an animacy gazetteer from WordNet (Bond and Paik, 2012) and then manually filtered the output for correctness.", "We provide the 
size of the languages' animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in Tab.", "4.", "For each sentence containing a noun in our animacy gazetteer, we created a copy of the sentence, intervened on Figure 5 : Gender stereotyping for words that are stereotyped toward men or women in Spanish using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "the noun, and then used our approach to transform the sentence.", "For sentences containing more than one animate noun, we generated a separate sentence for each possible combination of genders.", "Choosing which sentences to duplicate is a difficult task.", "For example, alemán in Spanish can refer to either a German man or the German language; however, we have no way of distinguishing between these two meanings without additional annotations.", "Multilingual animacy detection (Jahan et al., 2018) might help with this challenge; co-reference information might additionally help.", "For each language, we trained the BPE-RNNLM baseline open-vocabulary language model of Mielke and Eisner (2018) using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "We then computed gender stereotyping and grammaticality as described above.", "We provide example phrases in Tab.", "5; we provide a more extensive list of phrases in App.", "C. Results Fig.", "4 demonstrates depicts gender stereotyping and grammaticality for each language using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "It is immediately apparent that our approch reduces gender stereotyping.", "On average, our approach reduces gender stereotyping by a factor of 2.5 (the lowest and highest factors are 1.2 (Ita) and 5.0 (Esp), respectively).", "We expected that naïve swapping of gendered words would also reduce gender stereotyping.", "Indeed, we see that this simple heuristic reduces gender stereotyping for some but not all of the languages.", "For Spanish, we also examine specific words that are stereotyped Table 5 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Phrases 1 and 2 are grammatical, while phrases 3 and 4 are not (dentoted by \"*\").", "Gender stereotyping is measured using phrases 1 and 2.", "Grammaticality is measured using phrases 1 and 3 and using phrases 2 and 4; these scores are then averaged.", "toward men or women.", "We define a word to be stereotyped toward one gender if 75% of its occurrences are of that gender.", "Fig.", "5 suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women.", "The grammaticality of the corpora following CDA differs between languages.", "That said, with the exception of Hebrew, our approach either sacrifices less grammaticality than naïve swapping of gendered words and sometimes increases grammaticality over the original corpus.", "Given that we know the model did not perform as accurately for Hebrew (see Tab.", "3), this finding is not surprising.", "Related Work In contrast to previous work, we focus on mitigating gender stereotypes in languages with rich morphology-specifically languages that exhibit gender 
agreement.", "To date, the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English.", "For example, Bolukbasi et al.", "(2016) proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings; Lu et al.", "(2018) studied gender stereotypes in language models; and Rudinger et al.", "(2018) introduced a novel Winograd schema for evaluating gender stereotypes in co-reference resolution.", "The most closely related work is that of Zhao et al.", "(2018) , who used CDA to reduce gender stereotypes in co-reference resolution; however, their approach yields ungrammatical sentences in morphologically rich languages.", "Our approach is specifically intended to yield grammatical sentences when applied to such languages.", "Habash et al.", "(2019) also focused on morphologically rich languages, specifically Arabic, but in the context of gender identification in machine translation.", "Conclusion We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages.", "To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns.", "To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results.", "For example, we demonstrated that our approach reduces gender stereotyping in neural language models.", "Finally, we also identified avenues for future work, such as the inclusion of co-reference information.", "A Belief Propagation Update Equations Our belief propagation update equations are µ i→f (m) = f ∈N (i)\\{f } µ f →i (m) (3) µ f i →i (m) = φ i (m) µ i→f i (m) (4) µ f ij →i (m) = m ∈M ψ(m , m | p i , p j , ) µ j→f ij (m ) (5) µ f ij →j (m) = m ∈M ψ(m, m | p i , p j , ) µ i→f ij (m ) (6) where N (i) returns the set of neighbouring nodes of node i.", "The belief at any node is given by β(v) = f ∈N (v) µ f →v (m).", "(7) B Adjective Translations Tab.", "6 and Tab.", "7 contain the feminine and masculine translations of the four adjectives that we used.", "C Extrinsic Evaluation Example Phrases For each noun in our animacy gazetteer, we generated sixteen phrases.", "Consider the noun engineer as an example.", "We created four phrases-one for each translation of The good engineer, The bad engineer, The smart engineer, and The beautiful engineer.", "These phrases, as well as their prefix log-likelihoods are provided below in Tab.", "8.", "Table 8 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Ungrammatical phrases are denoted by \"*\".", "Phrase" ] }
{ "paper_header_number": [ "1", "2", "4.", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Gender Stereotypes in Text", "Reinflect the lemmata to their new forms.", "A Markov Random Field for Morpho-Syntactic Agreement", "Parameterization", "Inference", "Parameter Estimation", "Intervention", "Experiments", "Intrinsic Evaluation", "Extrinsic Evaluation", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-98#paper-1253#slide-2
Our focus: stereotypes in language modeling (Lu et al., 2018)
Training data counts are visible as likelihoods under a language model (rows: pronoun m/f, columns: stereotype m/f): m pronoun, m stereotype: He is a good doctor. m pronoun, f stereotype: He is a good nurse. f pronoun, m stereotype: She is a good doctor. f pronoun, f stereotype: She is a good nurse. For every sentence with she/he (e.g., She is a nurse.), add that sentence with he/she for training (e.g., He is a nurse.). Now they should yield a balanced model!
Training data counts are visible as likelihoods under a language model (rows: pronoun m/f, columns: stereotype m/f): m pronoun, m stereotype: He is a good doctor. m pronoun, f stereotype: He is a good nurse. f pronoun, m stereotype: She is a good doctor. f pronoun, f stereotype: She is a good nurse. For every sentence with she/he (e.g., She is a nurse.), add that sentence with he/she for training (e.g., He is a nurse.). Now they should yield a balanced model!
[]
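The slide above summarizes the naive counterfactual data augmentation heuristic: every sentence containing she/he is duplicated with the gendered words swapped. A minimal sketch is below; the swap table is a small illustrative subset, it ignores the her -> him/his case ambiguity, and, as the paper argues, this word-level swap is exactly what produces ungrammatical sentences once gender agreement enters the picture.

```python
# Naive English CDA: add a gender-swapped copy of every sentence that
# contains a word in the (deliberately tiny) swap table.
SWAP = {"he": "she", "she": "he", "him": "her", "her": "him", "his": "her"}

def cda_swap(sentence: str) -> str:
    return " ".join(SWAP.get(w, w) for w in sentence.split())

corpus = ["she is a nurse", "he is a good doctor"]
augmented = corpus + [cda_swap(s) for s in corpus
                      if any(w in SWAP for w in s.split())]
print(augmented)  # adds 'he is a nurse' and 'she is a good doctor'
```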
GEM-SciDuet-train-98#paper-1253#slide-3
1253
Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology
Gender stereotypes are manifest in most of the world's languages and are consequently propagated or amplified by NLP systems. Although research has focused on mitigating gender stereotypes in English, the approaches that are commonly employed produce ungrammatical sentences in morphologically rich languages. We present a novel approach for converting between masculine-inflected and feminineinflected sentences in such languages. For Spanish and Hebrew, our approach achieves F 1 scores of 82% and 73% at the level of tags and accuracies of 90% and 87% at the level of forms. By evaluating our approach using four different languages, we show that, on average, it reduces gender stereotyping by a factor of 2.5 without any sacrifice to grammaticality. Sebastian J. Mielke and Jason Eisner. 2018. Spell once, summon anywhere: A two-level open-vocabulary language model. CoRR, abs/1804.08205. Thomas Müller, Ryan Cotterell, Alexander Fraser, and Hinrich Schütze. 2015. Joint lemmatization and morphological tagging with lemming. In Proceed-
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases.", "This is because NLP systems depend on language corpora, which are inherently \"not objective; they are creations of human design\" (Crawford, 2013) .", "One type of societal bias that has received considerable attention from the NLP community is gender stereotyping (Garg et al., 2017; Rudinger et al., 2017; Sutton et al., 2018) .", "Gender stereotypes can manifest in language in overt ways.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "Consequently, any NLP system that is trained such a corpus will likely learn to associate engineer with men, but not with women (De-Arteaga et al., 2019) .", "To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English (Bolukbasi et al., 2016; Dixon et al., 2018; Zhao et al., 2017 ).", "Yet, gender stereotypes also exist in other languages .", "We extract the properties of each word in the sentence.", "We then fix a noun and its tags and infer the manner in which the remaining tags must be updated.", "Finally, we reinflect the lemmata to their new forms.", "because they are a function of society, not of grammar.", "Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement (Corbett, 1991) .", "In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "This means that if the gender of one word changes, the others have to be updated to match.", "As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped (Zhao et al., 2018) , will yield ungrammatical sentences.", "Consider the Spanish phrase el ingeniero experto (the skilled engineer).", "Replacing ingeniero with ingeniera is insufficient-el must also be replaced with la and experto with experta.", "In this paper, we present a new approach to counterfactual data augmentation (CDA; Lu et al., 2018) for mitigating gender stereotypes associated with animate 1 nouns (i.e., nouns that 
represent people) for morphologically rich languages.", "We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns.", "We use this model as part of a four-step process, depicted in Fig.", "1 , to reinflect entire sentences following an intervention on the grammatical gender of one word.", "We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level F 1 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively.", "We also conduct an extrinsic evaluation using four languages.", "Following Lu et al.", "(2018) , we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality.", "Gender Stereotypes in Text Men and women are mentioned at different rates in text (Coates, 1987) .", "This problem is exacerbated in certain contexts.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resumé filtering system.", "Gender stereotypes of this sort have been observed in word embeddings (Bolukbasi et al., 2016; Sutton et al., 2018) , contextual word embeddings (Zhao et al., 2019) , and co-reference resolution systems (Rudinger et al., 2018; Zhao et al., 2018) inter alia.", "A quick fix: swapping gendered words.", "One approach to mitigating such gender stereotypes is counterfactual data augmentation (CDA; Lu et al., 2018) .", "In English, this involves augmenting a corpus with additional sentences in which gendered words, such as he and she, have been swapped to yield a balanced representation.", "Indeed, Zhao et al.", "(2018) showed that this simple heuristic significantly reduces gender stereotyping in neural co-reference resolution systems, without harming system performance.", "Unfortunately, this approach is only applicable to English and other languages with little morphological inflection.", "When applied to morphologically rich languages that exhibit gender agreement, it yields ungrammatical sentences.", "The problem: inflected languages.", "Many languages, including Spanish and Hebrew, have gender inflections for nouns, verbs, and adjectivesi.e., the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "2 This means that if the gender of one word changes, the others have to be updated to preserve morpho-syntactic agreement (Corbett, 2012) .", "Consider the following example from Spanish, where we wish to transform Sentence (1) to Sentence (2).", "(Parts of words that mark gender are depicted in bold.)", "This task is not as simple as replacing el with la-ingeniero and experto must also be reinflected.", "Moreover, the changes required for one language are not the same as those required for another (e.g., verbs are marked with gender in Hebrew, but not in Spanish).", "( Our approach.", "Our goal is to transform sentences like Sentence (1) to Sentence (2) and vice versa.", "To the best of our knowledge, this task has not been studied previously.", "Indeed, there is no existing annotated corpus of paired sentences that could be used to train a supervised model.", "As a result, we 
take an unsupervised 3 approach using dependency trees, lemmata, part-of-speech (POS) tags, and morpho-syntactic tags from Universal Dependencies corpora (UD; Nivre et al., 2018) .", "Specifically, we propose the following four-step process: 1.", "Analyze the sentence (including parsing, morphological analysis, and lemmatization).", "Figure 2 : Dependency tree for the sentence El ingeniero alemán es muy experto.", "2.", "Intervene on a gendered word.", "3.", "Infer the new morpho-syntactic tags.", "Reinflect the lemmata to their new forms.", "This process is depicted in Fig.", "1 .", "The primary technical contribution is a novel Markov random field for performing step 3, described in the next section.", "A Markov Random Field for Morpho-Syntactic Agreement In this section, we present a Markov random field (MRF; Koller and Friedman, 2009 ) for morphosyntactic agreement.", "This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags.", "Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement.", "A dependency tree for a sentence (see Fig.", "2 for an example) is a set of ordered triples (i, j, ), where i and j are positions in the sentence (or a distinguished root symbol) and ∈ L is the label of the edge i → j in the tree; each position occurs exactly once as the first element in a triple.", "Each dependency tree T is associated with a sequence of morpho-syntactic tags m = m 1 , .", ".", ".", ", m |T | and a sequence of part-ofspeech (POS) tags p = p 1 , .", ".", ".", ", p |T | .", "For example, the tags m ∈ M and p ∈ P for ingeniero are [MSC; SG] and NOUN, respectively, because ingeniero is a masculine, singular noun.", "For notational simplicity, we define M = M |T | to be the set of all length-|T | sequences of morpho-syntactic tags.", "We define the probability of m given T and p as Pr(m | T, p) ∝ (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ), (1) where the binary factor ψ(·, · | ·, ·, ·) ≥ 0 scores how well the morpho-syntactic tags m i and m j agree given the POS tags p i and p j and the label .", "For example, consider the amod (adjectival modifier) edge from experto to ingeniero in Fig.", "2 .", "The factor ψ(m i , m j | A, N, amod) returns a high score if the corresponding morpho-syntactic tags agree in gender and number (e.g., m i = [MSC; SG] and m j = [MSC; SG]) and a low score if they do not (e.g., m i = [MSC; SG] and m j = [FEM; PL]).", "The unary factor φ i (·) ≥ 0 scores a morpho-syntactic tag m i outside the context of the dependency tree.", "As we explain in §3.1, we use these unary factors to force or disallow particular tags when performing an intervention; we do not learn them.", "Eq.", "(1) is normalized by the following partition function: Z(T, p) = m ∈M (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ).", "Z(T, p) can be calculated using belief propagation; we provide the update equations that we use in App.", "A.", "Our model is depicted in Fig.", "3 .", "It is noteworthy that this model is delexicalized-i.e., it considers only the labeled dependency tree and the POS tags, not the actual words themselves.", "Parameterization We consider a linear parameterization and a neural parameterization of the binary factor ψ(·, · | ·, ·, ·).", "Linear parameterization.", "We define a matrix W (p i , p j , ) ∈ R c×c for each triple (p i , p j , ), where c is the number 
of morpho-syntactic subtags.", "For example, [MSC; SG] has two subtags MSC and SG.", "We then define ψ(·, · | ·, ·, ·) as follows: ψ(m i , m j | p i , p j , ) = exp (m i W (p i , p j , )m j ), where m i ∈ {0, 1} c is a multi-hot encoding of m i .", "Neural parameterization.", "As an alternative, we also define a neural parameterization of W (p i , p j , ) to allow parameter sharing among El ingeniero alemán es muy experto edges with different parts of speech and labels: φ1(·) φ2(·) φ3(·) φ4(·) φ5(·) φ6(·) ψ(·, · | D, N, det) ψ(·, · | A, N, amod) ψ(·, · | N, V, cop) ψ(·, · | AV, A, advmod) ψ(·, · | A, N, amod) W (p i , p j , ) = exp (U tanh(V [e(p i ); e(p j ); e( )])) where U ∈ R c×c×n 1 , V ∈ R n 1 ×3n 2 , and n 1 and n 2 define the structure of the neural parameterization and each e(·) ∈ R n 2 is an embedding function.", "Parameterization of φ i .", "We use the unary factors only to force or disallow particular tags when performing an intervention.", "Specifically, we define φ i (m) = α if m = m i 1 otherwise, (2) where α > 1 is a strength parameter that determines the extent to which m i should remain unchanged following an intervention.", "In the limit as α → ∞, all tags will remain unchanged except for the tag directly involved in the intervention.", "4 Inference Because our MRF is acyclic and tree-shaped, we can use belief propagation (Pearl, 1988) to perform exact inference.", "The algorithm is a generalization of the forward-backward algorithm for hidden Markov models (Rabiner and Juang, 1986 Parameter Estimation We use gradient-based optimization.", "We treat the negative log-likelihood − log (Pr(m | T, p)) as the loss function for tree T and compute its gradient using automatic differentiation (Rall, 1981) .", "We learn the parameters of §3.1 by optimizing the negative log-likelihood using gradient descent.", "Intervention As explained in §2, our goal is to transform sentences like Sentence (1) to Sentence (2) by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morphosyntactic agreement.", "For example, if we change the morpho-syntactic tag for ingeniero from [MSC;SG] to [FEM;SG], then we must also update the tags for el and experto, but do not need to update the tag for es, which should remain unchanged as [IN; PR; SG].", "If we intervene on the i th word in a sentence, changing its tag from m i to m i , then using our model to infer the manner in which the remaining tags must be updated means using Pr(m −i | m i , T, p) to identify high-probability tags for the remaining words.", "Crucially, we wish to change as little as possible when intervening on a gendered word.", "The unary factors φ i enable us to do exactly this.", "As described in the previous section, the strength parameter α determines the extent to which m i should remain unchanged following an intervention-the larger the value, the less likely it is that m i will be changed.", "Once the new tags have been inferred, the final step is to reinflect the lemmata to their new forms.", "This task has received considerable attention from the NLP community (Cotterell et al., 2016 (Cotterell et al., , 2017 .", "We use the inflection model of .", "This model conditions on the lemma x and morphosyntactic tag m to form a distribution over possible inflections.", "For example, given experto and [A; FEM; PL], the trained inflection model will assign a high probability to expertas.", "We provide accuracies for the trained inflection model in Tab.", 
"1.", "Experiments We used the Adam optimizer (Kingma and Ba, 2014) to train both parameterizations of our model until the change in dev-loss was less than 10 −5 bits.", "We set β = (0.9, 0.999) without tuning, and chose a learning rate of 0.005 and weight decay factor of 0.0001 after tuning.", "We tuned log α in the set {0.5, 0.75, 1, 2, 5, 10} and chose log α = 1.", "For the neural parameterization, we set n 1 = 9 and n 2 = 3 without any tuning.", "Finally, we trained the inflection model using only gendered words.", "We evaluate our approach both intrinsically and extrinsically.", "For the intrinsic evaluation, we focus on whether our approach yields the correct morphosyntactic tags and the correct reinflections.", "For the extrinsic evaluation, we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models.", "Intrinsic Evaluation To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "We therefore annotated Spanish and Hebrew sentences ourselves, with annotations made by native speakers of each language.", "Specifically, for each language, we extracted sentences containing animate nouns from that language's UD treebank.", "The average length of these extracted sentences was 37 words.", "We then manually inspected each sentence, intervening on the gender of the animate noun and reinflecting the sentence accordingly.", "We chose Spanish and Hebrew because gender agreement operates differ- Table 3 : Tag-level precision, recall, F 1 score, and accuracy and form-level accuracy for the baselines (\"-BASE\") and for our approach (\"-LIN\" is the linear parameterization, \"-NN\" is the neural parameterization).", "ently in each language.", "We provide corpus statistics for both languages in the top two rows of Tab.", "2.", "We created a hard-coded ψ(·, · | ·, ·, ·) to serve as a baseline for each language.", "For Spanish, we only activated, i.e.", "set to a number greater than zero, values that relate adjectives and determiners to nouns; for Hebrew, we only activated values that relate adjectives and verbs to nouns.", "We created two separate baselines because gender agreement operates differently in each language.", "To evaluate our approach, we held all morphosyntactic subtags fixed except for gender.", "For each annotated sentence, we intervened on the gender of the animate noun.", "We then used our model to infer which of the remaining tags should be updated (updating a tag means swapping the gender subtag because all morpho-syntactic subtags were held fixed except for gender) and reinflected the lemmata.", "Finally, we used the annotations to compute the taglevel F 1 score and the form-level accuracy, excluding the animate nouns on which we intervened.", "Results.", "We present the results in Tab.", "3.", "Recall is consistently significantly lower than precision.", "As expected, the baselines have the highest precision (though not by much).", "This is because they reflect well-known rules for each language.", "That said, they have lower recall than our approach because they fail to capture more subtle relationships.", "For both languages, our approach struggles with conjunctions.", "For example, consider the phraseél es un ingeniero y escritor (he is an engineer and a writer).", "Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora.", "This is 
because two nouns do not normally need to have the same gender when they are conjoined.", "Moreover, our MRF does not include co-reference information, so it cannot tell that, in this case, both nouns refer to the same person.", "Note Figure 4 : Gender stereotyping (left) and grammaticality (right) using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "that including co-reference information in our MRF would create cycles and inference would no longer be exact.", "Additionally, the lack of co-reference information means that, for Spanish, our approach fails to convert nouns that are noun-modifiers or indirect objects of verbs.", "Somewhat surprisingly, the neural parameterization does not outperform the linear parameterization.", "We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels; however, this parameter sharing does not seem to make a difference in practice, so the linear parameterization is sufficient.", "Extrinsic Evaluation We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping.", "Following Lu et al.", "(2018) , focus on neural language models.", "We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages.", "As our measure of gender stereotyping, we compare the log ratio of the prefix probabilities under a language model P lm for gendered, animate nouns, such as ingeniero, combined with four adjectives: good, bad, smart, and beautiful.", "The translations we use for these adjectives are given in App.", "B.", "We chose the first two adjectives because they should be used equally to describe men and women, and the latter two because we expect that they will reveal gender stereotypes.", "For example, consider log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS La ingeniera buena x) .", "If this log ratio is close to 0, then the language model is as likely to generate sentences that start with el ingeniero bueno (the good male engineer) as it is to generate sentences that start with la ingeniera bueno (the good female engineer).", "If the log ratio is negative, then the language model is more likely to generate the feminine form than the masculine form, while the opposite is true if the log ratio is positive.", "In practice, given the current gender disparity in engineering, we would expect the log ratio to be positive.", "If, however, the language model were trained on a corpus to which our CDA approach had been applied, we would then expect the log ratio to be much closer to zero.", "Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS El ingeniera bueno x) .", "We trained the linear parameterization using UD treebanks for Spanish, Hebrew, French, and Italian (see Tab.", "2).", "For each of the four languages, we parsed one million sentences from Wikipedia (May 2018 dump) using Dozat and Manning (2016) 's parser and extracted taggings and lemmata using the method of Müller et al.", "(2015) .", "We automatically extracted an animacy gazetteer from WordNet (Bond and Paik, 2012) and then manually filtered the output for correctness.", "We provide the 
size of the languages' animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in Tab.", "4.", "For each sentence containing a noun in our animacy gazetteer, we created a copy of the sentence, intervened on Figure 5 : Gender stereotyping for words that are stereotyped toward men or women in Spanish using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "the noun, and then used our approach to transform the sentence.", "For sentences containing more than one animate noun, we generated a separate sentence for each possible combination of genders.", "Choosing which sentences to duplicate is a difficult task.", "For example, alemán in Spanish can refer to either a German man or the German language; however, we have no way of distinguishing between these two meanings without additional annotations.", "Multilingual animacy detection (Jahan et al., 2018) might help with this challenge; co-reference information might additionally help.", "For each language, we trained the BPE-RNNLM baseline open-vocabulary language model of Mielke and Eisner (2018) using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "We then computed gender stereotyping and grammaticality as described above.", "We provide example phrases in Tab.", "5; we provide a more extensive list of phrases in App.", "C. Results Fig.", "4 demonstrates depicts gender stereotyping and grammaticality for each language using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "It is immediately apparent that our approch reduces gender stereotyping.", "On average, our approach reduces gender stereotyping by a factor of 2.5 (the lowest and highest factors are 1.2 (Ita) and 5.0 (Esp), respectively).", "We expected that naïve swapping of gendered words would also reduce gender stereotyping.", "Indeed, we see that this simple heuristic reduces gender stereotyping for some but not all of the languages.", "For Spanish, we also examine specific words that are stereotyped Table 5 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Phrases 1 and 2 are grammatical, while phrases 3 and 4 are not (dentoted by \"*\").", "Gender stereotyping is measured using phrases 1 and 2.", "Grammaticality is measured using phrases 1 and 3 and using phrases 2 and 4; these scores are then averaged.", "toward men or women.", "We define a word to be stereotyped toward one gender if 75% of its occurrences are of that gender.", "Fig.", "5 suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women.", "The grammaticality of the corpora following CDA differs between languages.", "That said, with the exception of Hebrew, our approach either sacrifices less grammaticality than naïve swapping of gendered words and sometimes increases grammaticality over the original corpus.", "Given that we know the model did not perform as accurately for Hebrew (see Tab.", "3), this finding is not surprising.", "Related Work In contrast to previous work, we focus on mitigating gender stereotypes in languages with rich morphology-specifically languages that exhibit gender 
agreement.", "To date, the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English.", "For example, Bolukbasi et al.", "(2016) proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings; Lu et al.", "(2018) studied gender stereotypes in language models; and Rudinger et al.", "(2018) introduced a novel Winograd schema for evaluating gender stereotypes in co-reference resolution.", "The most closely related work is that of Zhao et al.", "(2018) , who used CDA to reduce gender stereotypes in co-reference resolution; however, their approach yields ungrammatical sentences in morphologically rich languages.", "Our approach is specifically intended to yield grammatical sentences when applied to such languages.", "Habash et al.", "(2019) also focused on morphologically rich languages, specifically Arabic, but in the context of gender identification in machine translation.", "Conclusion We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages.", "To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns.", "To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results.", "For example, we demonstrated that our approach reduces gender stereotyping in neural language models.", "Finally, we also identified avenues for future work, such as the inclusion of co-reference information.", "A Belief Propagation Update Equations Our belief propagation update equations are µ i→f (m) = f ∈N (i)\\{f } µ f →i (m) (3) µ f i →i (m) = φ i (m) µ i→f i (m) (4) µ f ij →i (m) = m ∈M ψ(m , m | p i , p j , ) µ j→f ij (m ) (5) µ f ij →j (m) = m ∈M ψ(m, m | p i , p j , ) µ i→f ij (m ) (6) where N (i) returns the set of neighbouring nodes of node i.", "The belief at any node is given by β(v) = f ∈N (v) µ f →v (m).", "(7) B Adjective Translations Tab.", "6 and Tab.", "7 contain the feminine and masculine translations of the four adjectives that we used.", "C Extrinsic Evaluation Example Phrases For each noun in our animacy gazetteer, we generated sixteen phrases.", "Consider the noun engineer as an example.", "We created four phrases-one for each translation of The good engineer, The bad engineer, The smart engineer, and The beautiful engineer.", "These phrases, as well as their prefix log-likelihoods are provided below in Tab.", "8.", "Table 8 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Ungrammatical phrases are denoted by \"*\".", "Phrase" ] }
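The belief propagation update equations quoted at the end of the paper content above (Appendix A, Eqs. 3-7) translate almost directly into code. The following is a minimal sum-product sketch for a tree-shaped MRF over morpho-syntactic tags; the toy tag set, the dictionary-based factors, and all function names are illustrative assumptions, not the authors' released implementation.

from collections import defaultdict
from math import prod

TAGS = ["MSC;SG", "FEM;SG"]  # toy tag inventory; an assumption for illustration

def bp_beliefs(edges, phi, psi):
    # edges: (head, dependent) pairs of a dependency tree
    # phi[i][m]: unary score of tag m at node i (e.g. the intervention boost alpha)
    # psi[(i, j)][m, n]: pairwise agreement score of tags m (at i) and n (at j)
    children, parent = defaultdict(list), {}
    for i, j in edges:
        children[i].append(j)
        parent[j] = i
    root = next(i for i, _ in edges if i not in parent)

    up = {}  # messages flowing towards the root (cf. Eqs. 3-5)
    def upward(i):
        for c in children[i]:
            upward(c)
        up[i] = {m: phi[i][m] * prod(
            sum(psi[(i, c)][m, n] * up[c][n] for n in TAGS)
            for c in children[i]) for m in TAGS}

    down = {root: {m: 1.0 for m in TAGS}}  # messages away from the root (cf. Eq. 6)
    def downward(i):
        for c in children[i]:
            down[c] = {n: sum(
                phi[i][m] * down[i][m] * psi[(i, c)][m, n] *
                prod(sum(psi[(i, s)][m, t] * up[s][t] for t in TAGS)
                     for s in children[i] if s != c)
                for m in TAGS) for n in TAGS}
            downward(c)

    upward(root)
    downward(root)
    # unnormalised belief at every node (cf. Eq. 7)
    return {i: {m: up[i][m] * down[i][m] for m in TAGS} for i in up}

Placing the unary boost on the intervened noun's new tag and normalising the returned beliefs gives per-word tag distributions from which the updated tags can be read off.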
{ "paper_header_number": [ "1", "2", "4.", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Gender Stereotypes in Text", "Reinflect the lemmata to their new forms.", "A Markov Random Field for Morpho-Syntactic Agreement", "Parameterization", "Inference", "Parameter Estimation", "Intervention", "Experiments", "Intrinsic Evaluation", "Extrinsic Evaluation", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-98#paper-1253#slide-3
Agreement, or: what if German?
m: Er ist ein guter Arzt. Er ist ein guter Krankenpfleger. f (pronoun): Sie ist eine gute Ärztin. Sie ist eine gute Krankenpflegerin. So, uh, can we just... change all words' grammatical gender? Example: Der Arzt sitzt auf einem Stuhl (The male doctor sits on a chair). Swap all: Die Ärztin sitzt auf einer Stuhl (The female doctor sits on a... what?). No, what we need is...
m: Er ist ein guter Arzt. Er ist ein guter Krankenpfleger. f (pronoun): Sie ist eine gute Ärztin. Sie ist eine gute Krankenpflegerin. So, uh, can we just... change all words' grammatical gender? Example: Der Arzt sitzt auf einem Stuhl (The male doctor sits on a chair). Swap all: Die Ärztin sitzt auf einer Stuhl (The female doctor sits on a... what?). No, what we need is...
[]
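The slide content above makes the failure mode of naive gendered-word swapping concrete: flipping every word also flips words that merely agree with an inanimate noun such as Stuhl. A toy sketch of that failure, using a small hypothetical swap table, follows.

# Toy illustration of the naive "swap gendered words" heuristic applied
# blindly to German. The swap table below is a hypothetical example.
NAIVE_SWAPS = {
    "Der": "Die", "Die": "Der",
    "Arzt": "Ärztin", "Ärztin": "Arzt",
    "einem": "einer", "einer": "einem",
}

def naive_swap(sentence):
    return " ".join(NAIVE_SWAPS.get(tok, tok) for tok in sentence.split())

print(naive_swap("Der Arzt sitzt auf einem Stuhl"))
# -> "Die Ärztin sitzt auf einer Stuhl": ungrammatical, because Stuhl (chair)
#    keeps its masculine grammatical gender, so einem must not change.

Deciding which words may change requires the dependency structure, which is exactly what the paper's MRF approach provides.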
GEM-SciDuet-train-98#paper-1253#slide-4
1253
Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology
Gender stereotypes are manifest in most of the world's languages and are consequently propagated or amplified by NLP systems. Although research has focused on mitigating gender stereotypes in English, the approaches that are commonly employed produce ungrammatical sentences in morphologically rich languages. We present a novel approach for converting between masculine-inflected and feminine-inflected sentences in such languages. For Spanish and Hebrew, our approach achieves F1 scores of 82% and 73% at the level of tags and accuracies of 90% and 87% at the level of forms. By evaluating our approach using four different languages, we show that, on average, it reduces gender stereotyping by a factor of 2.5 without any sacrifice to grammaticality.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases.", "This is because NLP systems depend on language corpora, which are inherently \"not objective; they are creations of human design\" (Crawford, 2013) .", "One type of societal bias that has received considerable attention from the NLP community is gender stereotyping (Garg et al., 2017; Rudinger et al., 2017; Sutton et al., 2018) .", "Gender stereotypes can manifest in language in overt ways.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "Consequently, any NLP system that is trained such a corpus will likely learn to associate engineer with men, but not with women (De-Arteaga et al., 2019) .", "To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English (Bolukbasi et al., 2016; Dixon et al., 2018; Zhao et al., 2017 ).", "Yet, gender stereotypes also exist in other languages .", "We extract the properties of each word in the sentence.", "We then fix a noun and its tags and infer the manner in which the remaining tags must be updated.", "Finally, we reinflect the lemmata to their new forms.", "because they are a function of society, not of grammar.", "Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement (Corbett, 1991) .", "In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "This means that if the gender of one word changes, the others have to be updated to match.", "As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped (Zhao et al., 2018) , will yield ungrammatical sentences.", "Consider the Spanish phrase el ingeniero experto (the skilled engineer).", "Replacing ingeniero with ingeniera is insufficient-el must also be replaced with la and experto with experta.", "In this paper, we present a new approach to counterfactual data augmentation (CDA; Lu et al., 2018) for mitigating gender stereotypes associated with animate 1 nouns (i.e., nouns that 
represent people) for morphologically rich languages.", "We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns.", "We use this model as part of a four-step process, depicted in Fig.", "1 , to reinflect entire sentences following an intervention on the grammatical gender of one word.", "We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level F 1 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively.", "We also conduct an extrinsic evaluation using four languages.", "Following Lu et al.", "(2018) , we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality.", "Gender Stereotypes in Text Men and women are mentioned at different rates in text (Coates, 1987) .", "This problem is exacerbated in certain contexts.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resumé filtering system.", "Gender stereotypes of this sort have been observed in word embeddings (Bolukbasi et al., 2016; Sutton et al., 2018) , contextual word embeddings (Zhao et al., 2019) , and co-reference resolution systems (Rudinger et al., 2018; Zhao et al., 2018) inter alia.", "A quick fix: swapping gendered words.", "One approach to mitigating such gender stereotypes is counterfactual data augmentation (CDA; Lu et al., 2018) .", "In English, this involves augmenting a corpus with additional sentences in which gendered words, such as he and she, have been swapped to yield a balanced representation.", "Indeed, Zhao et al.", "(2018) showed that this simple heuristic significantly reduces gender stereotyping in neural co-reference resolution systems, without harming system performance.", "Unfortunately, this approach is only applicable to English and other languages with little morphological inflection.", "When applied to morphologically rich languages that exhibit gender agreement, it yields ungrammatical sentences.", "The problem: inflected languages.", "Many languages, including Spanish and Hebrew, have gender inflections for nouns, verbs, and adjectivesi.e., the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "2 This means that if the gender of one word changes, the others have to be updated to preserve morpho-syntactic agreement (Corbett, 2012) .", "Consider the following example from Spanish, where we wish to transform Sentence (1) to Sentence (2).", "(Parts of words that mark gender are depicted in bold.)", "This task is not as simple as replacing el with la-ingeniero and experto must also be reinflected.", "Moreover, the changes required for one language are not the same as those required for another (e.g., verbs are marked with gender in Hebrew, but not in Spanish).", "( Our approach.", "Our goal is to transform sentences like Sentence (1) to Sentence (2) and vice versa.", "To the best of our knowledge, this task has not been studied previously.", "Indeed, there is no existing annotated corpus of paired sentences that could be used to train a supervised model.", "As a result, we 
take an unsupervised 3 approach using dependency trees, lemmata, part-of-speech (POS) tags, and morpho-syntactic tags from Universal Dependencies corpora (UD; Nivre et al., 2018) .", "Specifically, we propose the following four-step process: 1.", "Analyze the sentence (including parsing, morphological analysis, and lemmatization).", "Figure 2 : Dependency tree for the sentence El ingeniero alemán es muy experto.", "2.", "Intervene on a gendered word.", "3.", "Infer the new morpho-syntactic tags.", "Reinflect the lemmata to their new forms.", "This process is depicted in Fig.", "1 .", "The primary technical contribution is a novel Markov random field for performing step 3, described in the next section.", "A Markov Random Field for Morpho-Syntactic Agreement In this section, we present a Markov random field (MRF; Koller and Friedman, 2009 ) for morphosyntactic agreement.", "This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags.", "Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement.", "A dependency tree for a sentence (see Fig.", "2 for an example) is a set of ordered triples (i, j, ), where i and j are positions in the sentence (or a distinguished root symbol) and ∈ L is the label of the edge i → j in the tree; each position occurs exactly once as the first element in a triple.", "Each dependency tree T is associated with a sequence of morpho-syntactic tags m = m 1 , .", ".", ".", ", m |T | and a sequence of part-ofspeech (POS) tags p = p 1 , .", ".", ".", ", p |T | .", "For example, the tags m ∈ M and p ∈ P for ingeniero are [MSC; SG] and NOUN, respectively, because ingeniero is a masculine, singular noun.", "For notational simplicity, we define M = M |T | to be the set of all length-|T | sequences of morpho-syntactic tags.", "We define the probability of m given T and p as Pr(m | T, p) ∝ (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ), (1) where the binary factor ψ(·, · | ·, ·, ·) ≥ 0 scores how well the morpho-syntactic tags m i and m j agree given the POS tags p i and p j and the label .", "For example, consider the amod (adjectival modifier) edge from experto to ingeniero in Fig.", "2 .", "The factor ψ(m i , m j | A, N, amod) returns a high score if the corresponding morpho-syntactic tags agree in gender and number (e.g., m i = [MSC; SG] and m j = [MSC; SG]) and a low score if they do not (e.g., m i = [MSC; SG] and m j = [FEM; PL]).", "The unary factor φ i (·) ≥ 0 scores a morpho-syntactic tag m i outside the context of the dependency tree.", "As we explain in §3.1, we use these unary factors to force or disallow particular tags when performing an intervention; we do not learn them.", "Eq.", "(1) is normalized by the following partition function: Z(T, p) = m ∈M (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ).", "Z(T, p) can be calculated using belief propagation; we provide the update equations that we use in App.", "A.", "Our model is depicted in Fig.", "3 .", "It is noteworthy that this model is delexicalized-i.e., it considers only the labeled dependency tree and the POS tags, not the actual words themselves.", "Parameterization We consider a linear parameterization and a neural parameterization of the binary factor ψ(·, · | ·, ·, ·).", "Linear parameterization.", "We define a matrix W (p i , p j , ) ∈ R c×c for each triple (p i , p j , ), where c is the number 
of morpho-syntactic subtags.", "For example, [MSC; SG] has two subtags MSC and SG.", "We then define ψ(·, · | ·, ·, ·) as follows: ψ(m i , m j | p i , p j , ) = exp (m i W (p i , p j , )m j ), where m i ∈ {0, 1} c is a multi-hot encoding of m i .", "Neural parameterization.", "As an alternative, we also define a neural parameterization of W (p i , p j , ) to allow parameter sharing among El ingeniero alemán es muy experto edges with different parts of speech and labels: φ1(·) φ2(·) φ3(·) φ4(·) φ5(·) φ6(·) ψ(·, · | D, N, det) ψ(·, · | A, N, amod) ψ(·, · | N, V, cop) ψ(·, · | AV, A, advmod) ψ(·, · | A, N, amod) W (p i , p j , ) = exp (U tanh(V [e(p i ); e(p j ); e( )])) where U ∈ R c×c×n 1 , V ∈ R n 1 ×3n 2 , and n 1 and n 2 define the structure of the neural parameterization and each e(·) ∈ R n 2 is an embedding function.", "Parameterization of φ i .", "We use the unary factors only to force or disallow particular tags when performing an intervention.", "Specifically, we define φ i (m) = α if m = m i 1 otherwise, (2) where α > 1 is a strength parameter that determines the extent to which m i should remain unchanged following an intervention.", "In the limit as α → ∞, all tags will remain unchanged except for the tag directly involved in the intervention.", "4 Inference Because our MRF is acyclic and tree-shaped, we can use belief propagation (Pearl, 1988) to perform exact inference.", "The algorithm is a generalization of the forward-backward algorithm for hidden Markov models (Rabiner and Juang, 1986 Parameter Estimation We use gradient-based optimization.", "We treat the negative log-likelihood − log (Pr(m | T, p)) as the loss function for tree T and compute its gradient using automatic differentiation (Rall, 1981) .", "We learn the parameters of §3.1 by optimizing the negative log-likelihood using gradient descent.", "Intervention As explained in §2, our goal is to transform sentences like Sentence (1) to Sentence (2) by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morphosyntactic agreement.", "For example, if we change the morpho-syntactic tag for ingeniero from [MSC;SG] to [FEM;SG], then we must also update the tags for el and experto, but do not need to update the tag for es, which should remain unchanged as [IN; PR; SG].", "If we intervene on the i th word in a sentence, changing its tag from m i to m i , then using our model to infer the manner in which the remaining tags must be updated means using Pr(m −i | m i , T, p) to identify high-probability tags for the remaining words.", "Crucially, we wish to change as little as possible when intervening on a gendered word.", "The unary factors φ i enable us to do exactly this.", "As described in the previous section, the strength parameter α determines the extent to which m i should remain unchanged following an intervention-the larger the value, the less likely it is that m i will be changed.", "Once the new tags have been inferred, the final step is to reinflect the lemmata to their new forms.", "This task has received considerable attention from the NLP community (Cotterell et al., 2016 (Cotterell et al., , 2017 .", "We use the inflection model of .", "This model conditions on the lemma x and morphosyntactic tag m to form a distribution over possible inflections.", "For example, given experto and [A; FEM; PL], the trained inflection model will assign a high probability to expertas.", "We provide accuracies for the trained inflection model in Tab.", 
"1.", "Experiments We used the Adam optimizer (Kingma and Ba, 2014) to train both parameterizations of our model until the change in dev-loss was less than 10 −5 bits.", "We set β = (0.9, 0.999) without tuning, and chose a learning rate of 0.005 and weight decay factor of 0.0001 after tuning.", "We tuned log α in the set {0.5, 0.75, 1, 2, 5, 10} and chose log α = 1.", "For the neural parameterization, we set n 1 = 9 and n 2 = 3 without any tuning.", "Finally, we trained the inflection model using only gendered words.", "We evaluate our approach both intrinsically and extrinsically.", "For the intrinsic evaluation, we focus on whether our approach yields the correct morphosyntactic tags and the correct reinflections.", "For the extrinsic evaluation, we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models.", "Intrinsic Evaluation To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "We therefore annotated Spanish and Hebrew sentences ourselves, with annotations made by native speakers of each language.", "Specifically, for each language, we extracted sentences containing animate nouns from that language's UD treebank.", "The average length of these extracted sentences was 37 words.", "We then manually inspected each sentence, intervening on the gender of the animate noun and reinflecting the sentence accordingly.", "We chose Spanish and Hebrew because gender agreement operates differ- Table 3 : Tag-level precision, recall, F 1 score, and accuracy and form-level accuracy for the baselines (\"-BASE\") and for our approach (\"-LIN\" is the linear parameterization, \"-NN\" is the neural parameterization).", "ently in each language.", "We provide corpus statistics for both languages in the top two rows of Tab.", "2.", "We created a hard-coded ψ(·, · | ·, ·, ·) to serve as a baseline for each language.", "For Spanish, we only activated, i.e.", "set to a number greater than zero, values that relate adjectives and determiners to nouns; for Hebrew, we only activated values that relate adjectives and verbs to nouns.", "We created two separate baselines because gender agreement operates differently in each language.", "To evaluate our approach, we held all morphosyntactic subtags fixed except for gender.", "For each annotated sentence, we intervened on the gender of the animate noun.", "We then used our model to infer which of the remaining tags should be updated (updating a tag means swapping the gender subtag because all morpho-syntactic subtags were held fixed except for gender) and reinflected the lemmata.", "Finally, we used the annotations to compute the taglevel F 1 score and the form-level accuracy, excluding the animate nouns on which we intervened.", "Results.", "We present the results in Tab.", "3.", "Recall is consistently significantly lower than precision.", "As expected, the baselines have the highest precision (though not by much).", "This is because they reflect well-known rules for each language.", "That said, they have lower recall than our approach because they fail to capture more subtle relationships.", "For both languages, our approach struggles with conjunctions.", "For example, consider the phraseél es un ingeniero y escritor (he is an engineer and a writer).", "Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora.", "This is 
because two nouns do not normally need to have the same gender when they are conjoined.", "Moreover, our MRF does not include co-reference information, so it cannot tell that, in this case, both nouns refer to the same person.", "Note Figure 4 : Gender stereotyping (left) and grammaticality (right) using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "that including co-reference information in our MRF would create cycles and inference would no longer be exact.", "Additionally, the lack of co-reference information means that, for Spanish, our approach fails to convert nouns that are noun-modifiers or indirect objects of verbs.", "Somewhat surprisingly, the neural parameterization does not outperform the linear parameterization.", "We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels; however, this parameter sharing does not seem to make a difference in practice, so the linear parameterization is sufficient.", "Extrinsic Evaluation We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping.", "Following Lu et al.", "(2018) , focus on neural language models.", "We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages.", "As our measure of gender stereotyping, we compare the log ratio of the prefix probabilities under a language model P lm for gendered, animate nouns, such as ingeniero, combined with four adjectives: good, bad, smart, and beautiful.", "The translations we use for these adjectives are given in App.", "B.", "We chose the first two adjectives because they should be used equally to describe men and women, and the latter two because we expect that they will reveal gender stereotypes.", "For example, consider log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS La ingeniera buena x) .", "If this log ratio is close to 0, then the language model is as likely to generate sentences that start with el ingeniero bueno (the good male engineer) as it is to generate sentences that start with la ingeniera bueno (the good female engineer).", "If the log ratio is negative, then the language model is more likely to generate the feminine form than the masculine form, while the opposite is true if the log ratio is positive.", "In practice, given the current gender disparity in engineering, we would expect the log ratio to be positive.", "If, however, the language model were trained on a corpus to which our CDA approach had been applied, we would then expect the log ratio to be much closer to zero.", "Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS El ingeniera bueno x) .", "We trained the linear parameterization using UD treebanks for Spanish, Hebrew, French, and Italian (see Tab.", "2).", "For each of the four languages, we parsed one million sentences from Wikipedia (May 2018 dump) using Dozat and Manning (2016) 's parser and extracted taggings and lemmata using the method of Müller et al.", "(2015) .", "We automatically extracted an animacy gazetteer from WordNet (Bond and Paik, 2012) and then manually filtered the output for correctness.", "We provide the 
size of the languages' animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in Tab.", "4.", "For each sentence containing a noun in our animacy gazetteer, we created a copy of the sentence, intervened on Figure 5 : Gender stereotyping for words that are stereotyped toward men or women in Spanish using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "the noun, and then used our approach to transform the sentence.", "For sentences containing more than one animate noun, we generated a separate sentence for each possible combination of genders.", "Choosing which sentences to duplicate is a difficult task.", "For example, alemán in Spanish can refer to either a German man or the German language; however, we have no way of distinguishing between these two meanings without additional annotations.", "Multilingual animacy detection (Jahan et al., 2018) might help with this challenge; co-reference information might additionally help.", "For each language, we trained the BPE-RNNLM baseline open-vocabulary language model of Mielke and Eisner (2018) using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "We then computed gender stereotyping and grammaticality as described above.", "We provide example phrases in Tab.", "5; we provide a more extensive list of phrases in App.", "C. Results Fig.", "4 demonstrates depicts gender stereotyping and grammaticality for each language using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "It is immediately apparent that our approch reduces gender stereotyping.", "On average, our approach reduces gender stereotyping by a factor of 2.5 (the lowest and highest factors are 1.2 (Ita) and 5.0 (Esp), respectively).", "We expected that naïve swapping of gendered words would also reduce gender stereotyping.", "Indeed, we see that this simple heuristic reduces gender stereotyping for some but not all of the languages.", "For Spanish, we also examine specific words that are stereotyped Table 5 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Phrases 1 and 2 are grammatical, while phrases 3 and 4 are not (dentoted by \"*\").", "Gender stereotyping is measured using phrases 1 and 2.", "Grammaticality is measured using phrases 1 and 3 and using phrases 2 and 4; these scores are then averaged.", "toward men or women.", "We define a word to be stereotyped toward one gender if 75% of its occurrences are of that gender.", "Fig.", "5 suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women.", "The grammaticality of the corpora following CDA differs between languages.", "That said, with the exception of Hebrew, our approach either sacrifices less grammaticality than naïve swapping of gendered words and sometimes increases grammaticality over the original corpus.", "Given that we know the model did not perform as accurately for Hebrew (see Tab.", "3), this finding is not surprising.", "Related Work In contrast to previous work, we focus on mitigating gender stereotypes in languages with rich morphology-specifically languages that exhibit gender 
agreement.", "To date, the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English.", "For example, Bolukbasi et al.", "(2016) proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings; Lu et al.", "(2018) studied gender stereotypes in language models; and Rudinger et al.", "(2018) introduced a novel Winograd schema for evaluating gender stereotypes in co-reference resolution.", "The most closely related work is that of Zhao et al.", "(2018) , who used CDA to reduce gender stereotypes in co-reference resolution; however, their approach yields ungrammatical sentences in morphologically rich languages.", "Our approach is specifically intended to yield grammatical sentences when applied to such languages.", "Habash et al.", "(2019) also focused on morphologically rich languages, specifically Arabic, but in the context of gender identification in machine translation.", "Conclusion We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages.", "To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns.", "To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results.", "For example, we demonstrated that our approach reduces gender stereotyping in neural language models.", "Finally, we also identified avenues for future work, such as the inclusion of co-reference information.", "A Belief Propagation Update Equations Our belief propagation update equations are µ i→f (m) = f ∈N (i)\\{f } µ f →i (m) (3) µ f i →i (m) = φ i (m) µ i→f i (m) (4) µ f ij →i (m) = m ∈M ψ(m , m | p i , p j , ) µ j→f ij (m ) (5) µ f ij →j (m) = m ∈M ψ(m, m | p i , p j , ) µ i→f ij (m ) (6) where N (i) returns the set of neighbouring nodes of node i.", "The belief at any node is given by β(v) = f ∈N (v) µ f →v (m).", "(7) B Adjective Translations Tab.", "6 and Tab.", "7 contain the feminine and masculine translations of the four adjectives that we used.", "C Extrinsic Evaluation Example Phrases For each noun in our animacy gazetteer, we generated sixteen phrases.", "Consider the noun engineer as an example.", "We created four phrases-one for each translation of The good engineer, The bad engineer, The smart engineer, and The beautiful engineer.", "These phrases, as well as their prefix log-likelihoods are provided below in Tab.", "8.", "Table 8 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Ungrammatical phrases are denoted by \"*\".", "Phrase" ] }
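The linear parameterization described in the paper content above scores a pair of tags with a bilinear form over multi-hot subtag vectors, and the unary factor simply boosts the tag fixed by the intervention by a strength alpha. The NumPy sketch below is a minimal reading of those two definitions; the subtag inventory and the identity weight matrix are assumptions for illustration only.

import numpy as np

SUBTAGS = ["MSC", "FEM", "SG", "PL"]  # toy subtag inventory (an assumption)

def multi_hot(tag):
    # e.g. "FEM;SG" -> [0, 1, 1, 0]
    parts = set(tag.split(";"))
    return np.array([1.0 if s in parts else 0.0 for s in SUBTAGS])

def psi_linear(m_i, m_j, W):
    # psi(m_i, m_j | p_i, p_j, l) = exp(m_i^T W(p_i, p_j, l) m_j)
    return float(np.exp(multi_hot(m_i) @ W @ multi_hot(m_j)))

def phi(m, m_intervened, alpha=np.e):
    # unary factor: boost the tag fixed by the intervention by strength alpha > 1
    return alpha if m == m_intervened else 1.0

# In the paper one W per (POS_i, POS_j, label) triple is learned by maximising
# the likelihood in Eq. (1); a toy identity matrix already rewards matching
# gender and number subtags.
W_amod_noun = np.eye(len(SUBTAGS))
print(psi_linear("FEM;SG", "FEM;SG", W_amod_noun))  # high score: tags agree
print(psi_linear("MSC;SG", "FEM;PL", W_amod_noun))  # low score: tags disagree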
{ "paper_header_number": [ "1", "2", "4.", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Gender Stereotypes in Text", "Reinflect the lemmata to their new forms.", "A Markov Random Field for Morpho-Syntactic Agreement", "Parameterization", "Inference", "Parameter Estimation", "Intervention", "Experiments", "Intrinsic Evaluation", "Extrinsic Evaluation", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-98#paper-1253#slide-4
Syntax to the rescue: use dependency parses
Der gute Arzt sitzt auf einem Stuhl. Only words connected in the dependency parse should change! Build a MRF over morphological tags along the dependency parse! Before the intervention: [M; SG; NOM] [M; SG; NOM] [M; SG; NOM]; after: [F; SG; NOM] [F; SG; NOM] [F; SG; NOM]. Edge factors: learned from data, neural factors. Plus manual dampening boosts (not learned) that tags stay what they were before the intervention.
Der gute Arzt sitzt auf einem Stuhl. Only words connected in the dependency parse should change! Build a MRF over morphological tags along the dependency parse! Before the intervention: [M; SG; NOM] [M; SG; NOM] [M; SG; NOM]; after: [F; SG; NOM] [F; SG; NOM] [F; SG; NOM]. Edge factors: learned from data, neural factors. Plus manual dampening boosts (not learned) that tags stay what they were before the intervention.
[]
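The slide above compresses the paper's four-step pipeline: analyse the sentence, intervene on a gendered noun, infer the remaining tags with the MRF, and reinflect the lemmata. A high-level skeleton is sketched below; analyse, infer_tags_with_mrf, and reinflect are placeholders for the dependency parser/tagger, the MRF inference, and the inflection model, so their names and signatures are assumptions rather than a real API.

# High-level skeleton of the four-step CDA pipeline. analyse, infer_tags_with_mrf
# and reinflect stand in for the dependency parser/tagger, the MRF inference, and
# the inflection model; their names and signatures are illustrative assumptions.

def flip_gender(tag):
    # swap only the gender subtag, leave every other subtag untouched
    return ";".join({"MSC": "FEM", "FEM": "MSC"}.get(s, s) for s in tag.split(";"))

def counterfactually_augment(sentence, analyse, infer_tags_with_mrf, reinflect,
                             animacy_gazetteer):
    # 1. Analyse: dependency tree, POS tags, morpho-syntactic tags, lemmata.
    tree, pos, tags, lemmata = analyse(sentence)
    variants = []
    for i, (p, lemma) in enumerate(zip(pos, lemmata)):
        if p != "NOUN" or lemma not in animacy_gazetteer:
            continue
        # 2. Intervene on the grammatical gender of this animate noun.
        new_tag = flip_gender(tags[i])
        # 3. Infer how the remaining tags must change to preserve agreement.
        new_tags = infer_tags_with_mrf(tree, pos, tags, fixed={i: new_tag})
        # 4. Reinflect every lemma to its (possibly new) form.
        variants.append(" ".join(reinflect(l, t) for l, t in zip(lemmata, new_tags)))
    return variants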
GEM-SciDuet-train-98#paper-1253#slide-5
1253
Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology
Gender stereotypes are manifest in most of the world's languages and are consequently propagated or amplified by NLP systems. Although research has focused on mitigating gender stereotypes in English, the approaches that are commonly employed produce ungrammatical sentences in morphologically rich languages. We present a novel approach for converting between masculine-inflected and feminine-inflected sentences in such languages. For Spanish and Hebrew, our approach achieves F1 scores of 82% and 73% at the level of tags and accuracies of 90% and 87% at the level of forms. By evaluating our approach using four different languages, we show that, on average, it reduces gender stereotyping by a factor of 2.5 without any sacrifice to grammaticality.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases.", "This is because NLP systems depend on language corpora, which are inherently \"not objective; they are creations of human design\" (Crawford, 2013) .", "One type of societal bias that has received considerable attention from the NLP community is gender stereotyping (Garg et al., 2017; Rudinger et al., 2017; Sutton et al., 2018) .", "Gender stereotypes can manifest in language in overt ways.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "Consequently, any NLP system that is trained such a corpus will likely learn to associate engineer with men, but not with women (De-Arteaga et al., 2019) .", "To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English (Bolukbasi et al., 2016; Dixon et al., 2018; Zhao et al., 2017 ).", "Yet, gender stereotypes also exist in other languages .", "We extract the properties of each word in the sentence.", "We then fix a noun and its tags and infer the manner in which the remaining tags must be updated.", "Finally, we reinflect the lemmata to their new forms.", "because they are a function of society, not of grammar.", "Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement (Corbett, 1991) .", "In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "This means that if the gender of one word changes, the others have to be updated to match.", "As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped (Zhao et al., 2018) , will yield ungrammatical sentences.", "Consider the Spanish phrase el ingeniero experto (the skilled engineer).", "Replacing ingeniero with ingeniera is insufficient-el must also be replaced with la and experto with experta.", "In this paper, we present a new approach to counterfactual data augmentation (CDA; Lu et al., 2018) for mitigating gender stereotypes associated with animate 1 nouns (i.e., nouns that 
represent people) for morphologically rich languages.", "We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns.", "We use this model as part of a four-step process, depicted in Fig.", "1 , to reinflect entire sentences following an intervention on the grammatical gender of one word.", "We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level F 1 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively.", "We also conduct an extrinsic evaluation using four languages.", "Following Lu et al.", "(2018) , we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality.", "Gender Stereotypes in Text Men and women are mentioned at different rates in text (Coates, 1987) .", "This problem is exacerbated in certain contexts.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resumé filtering system.", "Gender stereotypes of this sort have been observed in word embeddings (Bolukbasi et al., 2016; Sutton et al., 2018) , contextual word embeddings (Zhao et al., 2019) , and co-reference resolution systems (Rudinger et al., 2018; Zhao et al., 2018) inter alia.", "A quick fix: swapping gendered words.", "One approach to mitigating such gender stereotypes is counterfactual data augmentation (CDA; Lu et al., 2018) .", "In English, this involves augmenting a corpus with additional sentences in which gendered words, such as he and she, have been swapped to yield a balanced representation.", "Indeed, Zhao et al.", "(2018) showed that this simple heuristic significantly reduces gender stereotyping in neural co-reference resolution systems, without harming system performance.", "Unfortunately, this approach is only applicable to English and other languages with little morphological inflection.", "When applied to morphologically rich languages that exhibit gender agreement, it yields ungrammatical sentences.", "The problem: inflected languages.", "Many languages, including Spanish and Hebrew, have gender inflections for nouns, verbs, and adjectivesi.e., the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "2 This means that if the gender of one word changes, the others have to be updated to preserve morpho-syntactic agreement (Corbett, 2012) .", "Consider the following example from Spanish, where we wish to transform Sentence (1) to Sentence (2).", "(Parts of words that mark gender are depicted in bold.)", "This task is not as simple as replacing el with la-ingeniero and experto must also be reinflected.", "Moreover, the changes required for one language are not the same as those required for another (e.g., verbs are marked with gender in Hebrew, but not in Spanish).", "( Our approach.", "Our goal is to transform sentences like Sentence (1) to Sentence (2) and vice versa.", "To the best of our knowledge, this task has not been studied previously.", "Indeed, there is no existing annotated corpus of paired sentences that could be used to train a supervised model.", "As a result, we 
take an unsupervised 3 approach using dependency trees, lemmata, part-of-speech (POS) tags, and morpho-syntactic tags from Universal Dependencies corpora (UD; Nivre et al., 2018) .", "Specifically, we propose the following four-step process: 1.", "Analyze the sentence (including parsing, morphological analysis, and lemmatization).", "Figure 2 : Dependency tree for the sentence El ingeniero alemán es muy experto.", "2.", "Intervene on a gendered word.", "3.", "Infer the new morpho-syntactic tags.", "Reinflect the lemmata to their new forms.", "This process is depicted in Fig.", "1 .", "The primary technical contribution is a novel Markov random field for performing step 3, described in the next section.", "A Markov Random Field for Morpho-Syntactic Agreement In this section, we present a Markov random field (MRF; Koller and Friedman, 2009 ) for morphosyntactic agreement.", "This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags.", "Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement.", "A dependency tree for a sentence (see Fig.", "2 for an example) is a set of ordered triples (i, j, ), where i and j are positions in the sentence (or a distinguished root symbol) and ∈ L is the label of the edge i → j in the tree; each position occurs exactly once as the first element in a triple.", "Each dependency tree T is associated with a sequence of morpho-syntactic tags m = m 1 , .", ".", ".", ", m |T | and a sequence of part-ofspeech (POS) tags p = p 1 , .", ".", ".", ", p |T | .", "For example, the tags m ∈ M and p ∈ P for ingeniero are [MSC; SG] and NOUN, respectively, because ingeniero is a masculine, singular noun.", "For notational simplicity, we define M = M |T | to be the set of all length-|T | sequences of morpho-syntactic tags.", "We define the probability of m given T and p as Pr(m | T, p) ∝ (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ), (1) where the binary factor ψ(·, · | ·, ·, ·) ≥ 0 scores how well the morpho-syntactic tags m i and m j agree given the POS tags p i and p j and the label .", "For example, consider the amod (adjectival modifier) edge from experto to ingeniero in Fig.", "2 .", "The factor ψ(m i , m j | A, N, amod) returns a high score if the corresponding morpho-syntactic tags agree in gender and number (e.g., m i = [MSC; SG] and m j = [MSC; SG]) and a low score if they do not (e.g., m i = [MSC; SG] and m j = [FEM; PL]).", "The unary factor φ i (·) ≥ 0 scores a morpho-syntactic tag m i outside the context of the dependency tree.", "As we explain in §3.1, we use these unary factors to force or disallow particular tags when performing an intervention; we do not learn them.", "Eq.", "(1) is normalized by the following partition function: Z(T, p) = m ∈M (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ).", "Z(T, p) can be calculated using belief propagation; we provide the update equations that we use in App.", "A.", "Our model is depicted in Fig.", "3 .", "It is noteworthy that this model is delexicalized-i.e., it considers only the labeled dependency tree and the POS tags, not the actual words themselves.", "Parameterization We consider a linear parameterization and a neural parameterization of the binary factor ψ(·, · | ·, ·, ·).", "Linear parameterization.", "We define a matrix W (p i , p j , ) ∈ R c×c for each triple (p i , p j , ), where c is the number 
of morpho-syntactic subtags.", "For example, [MSC; SG] has two subtags MSC and SG.", "We then define ψ(·, · | ·, ·, ·) as follows: ψ(m i , m j | p i , p j , ) = exp (m i W (p i , p j , )m j ), where m i ∈ {0, 1} c is a multi-hot encoding of m i .", "Neural parameterization.", "As an alternative, we also define a neural parameterization of W (p i , p j , ) to allow parameter sharing among El ingeniero alemán es muy experto edges with different parts of speech and labels: φ1(·) φ2(·) φ3(·) φ4(·) φ5(·) φ6(·) ψ(·, · | D, N, det) ψ(·, · | A, N, amod) ψ(·, · | N, V, cop) ψ(·, · | AV, A, advmod) ψ(·, · | A, N, amod) W (p i , p j , ) = exp (U tanh(V [e(p i ); e(p j ); e( )])) where U ∈ R c×c×n 1 , V ∈ R n 1 ×3n 2 , and n 1 and n 2 define the structure of the neural parameterization and each e(·) ∈ R n 2 is an embedding function.", "Parameterization of φ i .", "We use the unary factors only to force or disallow particular tags when performing an intervention.", "Specifically, we define φ i (m) = α if m = m i 1 otherwise, (2) where α > 1 is a strength parameter that determines the extent to which m i should remain unchanged following an intervention.", "In the limit as α → ∞, all tags will remain unchanged except for the tag directly involved in the intervention.", "4 Inference Because our MRF is acyclic and tree-shaped, we can use belief propagation (Pearl, 1988) to perform exact inference.", "The algorithm is a generalization of the forward-backward algorithm for hidden Markov models (Rabiner and Juang, 1986 Parameter Estimation We use gradient-based optimization.", "We treat the negative log-likelihood − log (Pr(m | T, p)) as the loss function for tree T and compute its gradient using automatic differentiation (Rall, 1981) .", "We learn the parameters of §3.1 by optimizing the negative log-likelihood using gradient descent.", "Intervention As explained in §2, our goal is to transform sentences like Sentence (1) to Sentence (2) by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morphosyntactic agreement.", "For example, if we change the morpho-syntactic tag for ingeniero from [MSC;SG] to [FEM;SG], then we must also update the tags for el and experto, but do not need to update the tag for es, which should remain unchanged as [IN; PR; SG].", "If we intervene on the i th word in a sentence, changing its tag from m i to m i , then using our model to infer the manner in which the remaining tags must be updated means using Pr(m −i | m i , T, p) to identify high-probability tags for the remaining words.", "Crucially, we wish to change as little as possible when intervening on a gendered word.", "The unary factors φ i enable us to do exactly this.", "As described in the previous section, the strength parameter α determines the extent to which m i should remain unchanged following an intervention-the larger the value, the less likely it is that m i will be changed.", "Once the new tags have been inferred, the final step is to reinflect the lemmata to their new forms.", "This task has received considerable attention from the NLP community (Cotterell et al., 2016 (Cotterell et al., , 2017 .", "We use the inflection model of .", "This model conditions on the lemma x and morphosyntactic tag m to form a distribution over possible inflections.", "For example, given experto and [A; FEM; PL], the trained inflection model will assign a high probability to expertas.", "We provide accuracies for the trained inflection model in Tab.", 
"1.", "Experiments We used the Adam optimizer (Kingma and Ba, 2014) to train both parameterizations of our model until the change in dev-loss was less than 10 −5 bits.", "We set β = (0.9, 0.999) without tuning, and chose a learning rate of 0.005 and weight decay factor of 0.0001 after tuning.", "We tuned log α in the set {0.5, 0.75, 1, 2, 5, 10} and chose log α = 1.", "For the neural parameterization, we set n 1 = 9 and n 2 = 3 without any tuning.", "Finally, we trained the inflection model using only gendered words.", "We evaluate our approach both intrinsically and extrinsically.", "For the intrinsic evaluation, we focus on whether our approach yields the correct morphosyntactic tags and the correct reinflections.", "For the extrinsic evaluation, we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models.", "Intrinsic Evaluation To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "We therefore annotated Spanish and Hebrew sentences ourselves, with annotations made by native speakers of each language.", "Specifically, for each language, we extracted sentences containing animate nouns from that language's UD treebank.", "The average length of these extracted sentences was 37 words.", "We then manually inspected each sentence, intervening on the gender of the animate noun and reinflecting the sentence accordingly.", "We chose Spanish and Hebrew because gender agreement operates differ- Table 3 : Tag-level precision, recall, F 1 score, and accuracy and form-level accuracy for the baselines (\"-BASE\") and for our approach (\"-LIN\" is the linear parameterization, \"-NN\" is the neural parameterization).", "ently in each language.", "We provide corpus statistics for both languages in the top two rows of Tab.", "2.", "We created a hard-coded ψ(·, · | ·, ·, ·) to serve as a baseline for each language.", "For Spanish, we only activated, i.e.", "set to a number greater than zero, values that relate adjectives and determiners to nouns; for Hebrew, we only activated values that relate adjectives and verbs to nouns.", "We created two separate baselines because gender agreement operates differently in each language.", "To evaluate our approach, we held all morphosyntactic subtags fixed except for gender.", "For each annotated sentence, we intervened on the gender of the animate noun.", "We then used our model to infer which of the remaining tags should be updated (updating a tag means swapping the gender subtag because all morpho-syntactic subtags were held fixed except for gender) and reinflected the lemmata.", "Finally, we used the annotations to compute the taglevel F 1 score and the form-level accuracy, excluding the animate nouns on which we intervened.", "Results.", "We present the results in Tab.", "3.", "Recall is consistently significantly lower than precision.", "As expected, the baselines have the highest precision (though not by much).", "This is because they reflect well-known rules for each language.", "That said, they have lower recall than our approach because they fail to capture more subtle relationships.", "For both languages, our approach struggles with conjunctions.", "For example, consider the phraseél es un ingeniero y escritor (he is an engineer and a writer).", "Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora.", "This is 
because two nouns do not normally need to have the same gender when they are conjoined.", "Moreover, our MRF does not include co-reference information, so it cannot tell that, in this case, both nouns refer to the same person.", "Note Figure 4 : Gender stereotyping (left) and grammaticality (right) using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "that including co-reference information in our MRF would create cycles and inference would no longer be exact.", "Additionally, the lack of co-reference information means that, for Spanish, our approach fails to convert nouns that are noun-modifiers or indirect objects of verbs.", "Somewhat surprisingly, the neural parameterization does not outperform the linear parameterization.", "We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels; however, this parameter sharing does not seem to make a difference in practice, so the linear parameterization is sufficient.", "Extrinsic Evaluation We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping.", "Following Lu et al.", "(2018) , focus on neural language models.", "We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages.", "As our measure of gender stereotyping, we compare the log ratio of the prefix probabilities under a language model P lm for gendered, animate nouns, such as ingeniero, combined with four adjectives: good, bad, smart, and beautiful.", "The translations we use for these adjectives are given in App.", "B.", "We chose the first two adjectives because they should be used equally to describe men and women, and the latter two because we expect that they will reveal gender stereotypes.", "For example, consider log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS La ingeniera buena x) .", "If this log ratio is close to 0, then the language model is as likely to generate sentences that start with el ingeniero bueno (the good male engineer) as it is to generate sentences that start with la ingeniera bueno (the good female engineer).", "If the log ratio is negative, then the language model is more likely to generate the feminine form than the masculine form, while the opposite is true if the log ratio is positive.", "In practice, given the current gender disparity in engineering, we would expect the log ratio to be positive.", "If, however, the language model were trained on a corpus to which our CDA approach had been applied, we would then expect the log ratio to be much closer to zero.", "Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS El ingeniera bueno x) .", "We trained the linear parameterization using UD treebanks for Spanish, Hebrew, French, and Italian (see Tab.", "2).", "For each of the four languages, we parsed one million sentences from Wikipedia (May 2018 dump) using Dozat and Manning (2016) 's parser and extracted taggings and lemmata using the method of Müller et al.", "(2015) .", "We automatically extracted an animacy gazetteer from WordNet (Bond and Paik, 2012) and then manually filtered the output for correctness.", "We provide the 
size of the languages' animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in Tab.", "4.", "For each sentence containing a noun in our animacy gazetteer, we created a copy of the sentence, intervened on the noun, and then used our approach to transform the sentence.", "Figure 5: Gender stereotyping for words that are stereotyped toward men or women in Spanish using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "For sentences containing more than one animate noun, we generated a separate sentence for each possible combination of genders.", "Choosing which sentences to duplicate is a difficult task.", "For example, alemán in Spanish can refer to either a German man or the German language; however, we have no way of distinguishing between these two meanings without additional annotations.", "Multilingual animacy detection (Jahan et al., 2018) might help with this challenge; co-reference information might additionally help.", "For each language, we trained the BPE-RNNLM baseline open-vocabulary language model of Mielke and Eisner (2018) using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "We then computed gender stereotyping and grammaticality as described above.", "We provide example phrases in Tab.", "5; we provide a more extensive list of phrases in App.", "C. Results Fig.", "4 depicts gender stereotyping and grammaticality for each language using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "It is immediately apparent that our approach reduces gender stereotyping.", "On average, our approach reduces gender stereotyping by a factor of 2.5 (the lowest and highest factors are 1.2 (Ita) and 5.0 (Esp), respectively).", "We expected that naïve swapping of gendered words would also reduce gender stereotyping.", "Indeed, we see that this simple heuristic reduces gender stereotyping for some but not all of the languages.", "For Spanish, we also examine specific words that are stereotyped toward men or women.", "Table 5: Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Phrases 1 and 2 are grammatical, while phrases 3 and 4 are not (denoted by \"*\").", "Gender stereotyping is measured using phrases 1 and 2.", "Grammaticality is measured using phrases 1 and 3 and using phrases 2 and 4; these scores are then averaged.", "We define a word to be stereotyped toward one gender if 75% of its occurrences are of that gender.", "Fig.", "5 suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women.", "The grammaticality of the corpora following CDA differs between languages.", "That said, with the exception of Hebrew, our approach sacrifices less grammaticality than naïve swapping of gendered words and sometimes even increases grammaticality over the original corpus.", "Given that we know the model did not perform as accurately for Hebrew (see Tab.", "3), this finding is not surprising.", "Related Work In contrast to previous work, we focus on mitigating gender stereotypes in languages with rich morphology, specifically languages that exhibit gender 
agreement.", "To date, the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English.", "For example, Bolukbasi et al.", "(2016) proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings; Lu et al.", "(2018) studied gender stereotypes in language models; and Rudinger et al.", "(2018) introduced a novel Winograd schema for evaluating gender stereotypes in co-reference resolution.", "The most closely related work is that of Zhao et al.", "(2018) , who used CDA to reduce gender stereotypes in co-reference resolution; however, their approach yields ungrammatical sentences in morphologically rich languages.", "Our approach is specifically intended to yield grammatical sentences when applied to such languages.", "Habash et al.", "(2019) also focused on morphologically rich languages, specifically Arabic, but in the context of gender identification in machine translation.", "Conclusion We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages.", "To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns.", "To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results.", "For example, we demonstrated that our approach reduces gender stereotyping in neural language models.", "Finally, we also identified avenues for future work, such as the inclusion of co-reference information.", "A Belief Propagation Update Equations Our belief propagation update equations are µ i→f (m) = f ∈N (i)\\{f } µ f →i (m) (3) µ f i →i (m) = φ i (m) µ i→f i (m) (4) µ f ij →i (m) = m ∈M ψ(m , m | p i , p j , ) µ j→f ij (m ) (5) µ f ij →j (m) = m ∈M ψ(m, m | p i , p j , ) µ i→f ij (m ) (6) where N (i) returns the set of neighbouring nodes of node i.", "The belief at any node is given by β(v) = f ∈N (v) µ f →v (m).", "(7) B Adjective Translations Tab.", "6 and Tab.", "7 contain the feminine and masculine translations of the four adjectives that we used.", "C Extrinsic Evaluation Example Phrases For each noun in our animacy gazetteer, we generated sixteen phrases.", "Consider the noun engineer as an example.", "We created four phrases-one for each translation of The good engineer, The bad engineer, The smart engineer, and The beautiful engineer.", "These phrases, as well as their prefix log-likelihoods are provided below in Tab.", "8.", "Table 8 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Ungrammatical phrases are denoted by \"*\".", "Phrase" ] }
{ "paper_header_number": [ "1", "2", "4.", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Gender Stereotypes in Text", "Reinflect the lemmata to their new forms.", "A Markov Random Field for Morpho-Syntactic Agreement", "Parameterization", "Inference", "Parameter Estimation", "Intervention", "Experiments", "Intrinsic Evaluation", "Extrinsic Evaluation", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-98#paper-1253#slide-5
Recap: what is a Markov Random Field? (Koller and Friedman, 2009)
Model p(x, y, z) by decomposing into factors. Every factor gives a score to certain assignments. Add up all factors to obtain a global score: score(x, y, z). Get p by global normalization (easy in trees): p(x, y, z) ∝ exp score(x, y, z)
Model p(x, y, z) by decomposing into factors. Every factor gives a score to certain assignments. Add up all factors to obtain a global score: score(x, y, z). Get p by global normalization (easy in trees): p(x, y, z) ∝ exp score(x, y, z)
[]
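The slide above compresses the usual MRF recipe: per-factor scores, a global score that sums them, and a globally normalized distribution. The toy sketch below spells that recipe out for three binary variables; the factor values are invented, and exhaustive enumeration stands in for the efficient tree-structured computation the slide alludes to.

```python
import itertools
import math

DOMAIN = [0, 1]  # toy domain for the variables x, y, z

def factor_xy(x, y):
    """Score for the (x, y) factor (illustrative numbers)."""
    return 2.0 if x == y else 0.0

def factor_yz(y, z):
    """Score for the (y, z) factor (illustrative numbers)."""
    return 1.5 if y != z else 0.0

def score(x, y, z):
    """Global score: add up all factor scores."""
    return factor_xy(x, y) + factor_yz(y, z)

def joint():
    """p(x, y, z) = exp(score(x, y, z)) / Z, normalized over all assignments."""
    assignments = list(itertools.product(DOMAIN, repeat=3))
    Z = sum(math.exp(score(*a)) for a in assignments)
    return {a: math.exp(score(*a)) / Z for a in assignments}

for assignment, p in sorted(joint().items()):
    print(assignment, round(p, 4))
```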
GEM-SciDuet-train-98#paper-1253#slide-6
1253
Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology
Gender stereotypes are manifest in most of the world's languages and are consequently propagated or amplified by NLP systems. Although research has focused on mitigating gender stereotypes in English, the approaches that are commonly employed produce ungrammatical sentences in morphologically rich languages. We present a novel approach for converting between masculine-inflected and feminine-inflected sentences in such languages. For Spanish and Hebrew, our approach achieves F1 scores of 82% and 73% at the level of tags and accuracies of 90% and 87% at the level of forms. By evaluating our approach using four different languages, we show that, on average, it reduces gender stereotyping by a factor of 2.5 without any sacrifice to grammaticality. Sebastian J. Mielke and Jason Eisner. 2018. Spell once, summon anywhere: A two-level open-vocabulary language model. CoRR, abs/1804.08205. Thomas Müller, Ryan Cotterell, Alexander Fraser, and Hinrich Schütze. 2015. Joint lemmatization and morphological tagging with lemming. In Proceed-
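The abstract above contrasts this method with the swap-based counterfactual data augmentation commonly used for English, which the paper argues breaks down under gender agreement. For reference, a minimal sketch of that naive baseline follows; the swap list is illustrative and deliberately tiny, and the casing handling is crude.

```python
# Illustrative swap list for the naive English CDA baseline.
SWAP_PAIRS = {"he": "she", "she": "he", "him": "her", "his": "hers",
              "man": "woman", "woman": "man"}

def naive_cda(sentence):
    """Swap gendered words in place; adequate for English, but in languages
    with gender agreement it leaves determiners, adjectives, and verbs
    unchanged and so produces ungrammatical sentences."""
    return " ".join(SWAP_PAIRS.get(tok.lower(), tok) for tok in sentence.split())

def augment(corpus):
    """Return the original corpus plus its swapped counterfactual copies."""
    return corpus + [naive_cda(s) for s in corpus]

print(augment(["he is an engineer", "she is a nurse"]))
```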
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases.", "This is because NLP systems depend on language corpora, which are inherently \"not objective; they are creations of human design\" (Crawford, 2013) .", "One type of societal bias that has received considerable attention from the NLP community is gender stereotyping (Garg et al., 2017; Rudinger et al., 2017; Sutton et al., 2018) .", "Gender stereotypes can manifest in language in overt ways.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "Consequently, any NLP system that is trained such a corpus will likely learn to associate engineer with men, but not with women (De-Arteaga et al., 2019) .", "To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English (Bolukbasi et al., 2016; Dixon et al., 2018; Zhao et al., 2017 ).", "Yet, gender stereotypes also exist in other languages .", "We extract the properties of each word in the sentence.", "We then fix a noun and its tags and infer the manner in which the remaining tags must be updated.", "Finally, we reinflect the lemmata to their new forms.", "because they are a function of society, not of grammar.", "Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement (Corbett, 1991) .", "In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "This means that if the gender of one word changes, the others have to be updated to match.", "As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped (Zhao et al., 2018) , will yield ungrammatical sentences.", "Consider the Spanish phrase el ingeniero experto (the skilled engineer).", "Replacing ingeniero with ingeniera is insufficient-el must also be replaced with la and experto with experta.", "In this paper, we present a new approach to counterfactual data augmentation (CDA; Lu et al., 2018) for mitigating gender stereotypes associated with animate 1 nouns (i.e., nouns that 
represent people) for morphologically rich languages.", "We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns.", "We use this model as part of a four-step process, depicted in Fig.", "1 , to reinflect entire sentences following an intervention on the grammatical gender of one word.", "We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level F 1 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively.", "We also conduct an extrinsic evaluation using four languages.", "Following Lu et al.", "(2018) , we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality.", "Gender Stereotypes in Text Men and women are mentioned at different rates in text (Coates, 1987) .", "This problem is exacerbated in certain contexts.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resumé filtering system.", "Gender stereotypes of this sort have been observed in word embeddings (Bolukbasi et al., 2016; Sutton et al., 2018) , contextual word embeddings (Zhao et al., 2019) , and co-reference resolution systems (Rudinger et al., 2018; Zhao et al., 2018) inter alia.", "A quick fix: swapping gendered words.", "One approach to mitigating such gender stereotypes is counterfactual data augmentation (CDA; Lu et al., 2018) .", "In English, this involves augmenting a corpus with additional sentences in which gendered words, such as he and she, have been swapped to yield a balanced representation.", "Indeed, Zhao et al.", "(2018) showed that this simple heuristic significantly reduces gender stereotyping in neural co-reference resolution systems, without harming system performance.", "Unfortunately, this approach is only applicable to English and other languages with little morphological inflection.", "When applied to morphologically rich languages that exhibit gender agreement, it yields ungrammatical sentences.", "The problem: inflected languages.", "Many languages, including Spanish and Hebrew, have gender inflections for nouns, verbs, and adjectivesi.e., the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "2 This means that if the gender of one word changes, the others have to be updated to preserve morpho-syntactic agreement (Corbett, 2012) .", "Consider the following example from Spanish, where we wish to transform Sentence (1) to Sentence (2).", "(Parts of words that mark gender are depicted in bold.)", "This task is not as simple as replacing el with la-ingeniero and experto must also be reinflected.", "Moreover, the changes required for one language are not the same as those required for another (e.g., verbs are marked with gender in Hebrew, but not in Spanish).", "( Our approach.", "Our goal is to transform sentences like Sentence (1) to Sentence (2) and vice versa.", "To the best of our knowledge, this task has not been studied previously.", "Indeed, there is no existing annotated corpus of paired sentences that could be used to train a supervised model.", "As a result, we 
take an unsupervised 3 approach using dependency trees, lemmata, part-of-speech (POS) tags, and morpho-syntactic tags from Universal Dependencies corpora (UD; Nivre et al., 2018) .", "Specifically, we propose the following four-step process: 1.", "Analyze the sentence (including parsing, morphological analysis, and lemmatization).", "Figure 2 : Dependency tree for the sentence El ingeniero alemán es muy experto.", "2.", "Intervene on a gendered word.", "3.", "Infer the new morpho-syntactic tags.", "Reinflect the lemmata to their new forms.", "This process is depicted in Fig.", "1 .", "The primary technical contribution is a novel Markov random field for performing step 3, described in the next section.", "A Markov Random Field for Morpho-Syntactic Agreement In this section, we present a Markov random field (MRF; Koller and Friedman, 2009 ) for morphosyntactic agreement.", "This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags.", "Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement.", "A dependency tree for a sentence (see Fig.", "2 for an example) is a set of ordered triples (i, j, ), where i and j are positions in the sentence (or a distinguished root symbol) and ∈ L is the label of the edge i → j in the tree; each position occurs exactly once as the first element in a triple.", "Each dependency tree T is associated with a sequence of morpho-syntactic tags m = m 1 , .", ".", ".", ", m |T | and a sequence of part-ofspeech (POS) tags p = p 1 , .", ".", ".", ", p |T | .", "For example, the tags m ∈ M and p ∈ P for ingeniero are [MSC; SG] and NOUN, respectively, because ingeniero is a masculine, singular noun.", "For notational simplicity, we define M = M |T | to be the set of all length-|T | sequences of morpho-syntactic tags.", "We define the probability of m given T and p as Pr(m | T, p) ∝ (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ), (1) where the binary factor ψ(·, · | ·, ·, ·) ≥ 0 scores how well the morpho-syntactic tags m i and m j agree given the POS tags p i and p j and the label .", "For example, consider the amod (adjectival modifier) edge from experto to ingeniero in Fig.", "2 .", "The factor ψ(m i , m j | A, N, amod) returns a high score if the corresponding morpho-syntactic tags agree in gender and number (e.g., m i = [MSC; SG] and m j = [MSC; SG]) and a low score if they do not (e.g., m i = [MSC; SG] and m j = [FEM; PL]).", "The unary factor φ i (·) ≥ 0 scores a morpho-syntactic tag m i outside the context of the dependency tree.", "As we explain in §3.1, we use these unary factors to force or disallow particular tags when performing an intervention; we do not learn them.", "Eq.", "(1) is normalized by the following partition function: Z(T, p) = m ∈M (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ).", "Z(T, p) can be calculated using belief propagation; we provide the update equations that we use in App.", "A.", "Our model is depicted in Fig.", "3 .", "It is noteworthy that this model is delexicalized-i.e., it considers only the labeled dependency tree and the POS tags, not the actual words themselves.", "Parameterization We consider a linear parameterization and a neural parameterization of the binary factor ψ(·, · | ·, ·, ·).", "Linear parameterization.", "We define a matrix W (p i , p j , ) ∈ R c×c for each triple (p i , p j , ), where c is the number 
of morpho-syntactic subtags.", "For example, [MSC; SG] has two subtags MSC and SG.", "We then define ψ(·, · | ·, ·, ·) as follows: ψ(m i , m j | p i , p j , ) = exp (m i W (p i , p j , )m j ), where m i ∈ {0, 1} c is a multi-hot encoding of m i .", "Neural parameterization.", "As an alternative, we also define a neural parameterization of W (p i , p j , ) to allow parameter sharing among El ingeniero alemán es muy experto edges with different parts of speech and labels: φ1(·) φ2(·) φ3(·) φ4(·) φ5(·) φ6(·) ψ(·, · | D, N, det) ψ(·, · | A, N, amod) ψ(·, · | N, V, cop) ψ(·, · | AV, A, advmod) ψ(·, · | A, N, amod) W (p i , p j , ) = exp (U tanh(V [e(p i ); e(p j ); e( )])) where U ∈ R c×c×n 1 , V ∈ R n 1 ×3n 2 , and n 1 and n 2 define the structure of the neural parameterization and each e(·) ∈ R n 2 is an embedding function.", "Parameterization of φ i .", "We use the unary factors only to force or disallow particular tags when performing an intervention.", "Specifically, we define φ i (m) = α if m = m i 1 otherwise, (2) where α > 1 is a strength parameter that determines the extent to which m i should remain unchanged following an intervention.", "In the limit as α → ∞, all tags will remain unchanged except for the tag directly involved in the intervention.", "4 Inference Because our MRF is acyclic and tree-shaped, we can use belief propagation (Pearl, 1988) to perform exact inference.", "The algorithm is a generalization of the forward-backward algorithm for hidden Markov models (Rabiner and Juang, 1986 Parameter Estimation We use gradient-based optimization.", "We treat the negative log-likelihood − log (Pr(m | T, p)) as the loss function for tree T and compute its gradient using automatic differentiation (Rall, 1981) .", "We learn the parameters of §3.1 by optimizing the negative log-likelihood using gradient descent.", "Intervention As explained in §2, our goal is to transform sentences like Sentence (1) to Sentence (2) by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morphosyntactic agreement.", "For example, if we change the morpho-syntactic tag for ingeniero from [MSC;SG] to [FEM;SG], then we must also update the tags for el and experto, but do not need to update the tag for es, which should remain unchanged as [IN; PR; SG].", "If we intervene on the i th word in a sentence, changing its tag from m i to m i , then using our model to infer the manner in which the remaining tags must be updated means using Pr(m −i | m i , T, p) to identify high-probability tags for the remaining words.", "Crucially, we wish to change as little as possible when intervening on a gendered word.", "The unary factors φ i enable us to do exactly this.", "As described in the previous section, the strength parameter α determines the extent to which m i should remain unchanged following an intervention-the larger the value, the less likely it is that m i will be changed.", "Once the new tags have been inferred, the final step is to reinflect the lemmata to their new forms.", "This task has received considerable attention from the NLP community (Cotterell et al., 2016 (Cotterell et al., , 2017 .", "We use the inflection model of .", "This model conditions on the lemma x and morphosyntactic tag m to form a distribution over possible inflections.", "For example, given experto and [A; FEM; PL], the trained inflection model will assign a high probability to expertas.", "We provide accuracies for the trained inflection model in Tab.", 
"1.", "Experiments We used the Adam optimizer (Kingma and Ba, 2014) to train both parameterizations of our model until the change in dev-loss was less than 10 −5 bits.", "We set β = (0.9, 0.999) without tuning, and chose a learning rate of 0.005 and weight decay factor of 0.0001 after tuning.", "We tuned log α in the set {0.5, 0.75, 1, 2, 5, 10} and chose log α = 1.", "For the neural parameterization, we set n 1 = 9 and n 2 = 3 without any tuning.", "Finally, we trained the inflection model using only gendered words.", "We evaluate our approach both intrinsically and extrinsically.", "For the intrinsic evaluation, we focus on whether our approach yields the correct morphosyntactic tags and the correct reinflections.", "For the extrinsic evaluation, we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models.", "Intrinsic Evaluation To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "We therefore annotated Spanish and Hebrew sentences ourselves, with annotations made by native speakers of each language.", "Specifically, for each language, we extracted sentences containing animate nouns from that language's UD treebank.", "The average length of these extracted sentences was 37 words.", "We then manually inspected each sentence, intervening on the gender of the animate noun and reinflecting the sentence accordingly.", "We chose Spanish and Hebrew because gender agreement operates differ- Table 3 : Tag-level precision, recall, F 1 score, and accuracy and form-level accuracy for the baselines (\"-BASE\") and for our approach (\"-LIN\" is the linear parameterization, \"-NN\" is the neural parameterization).", "ently in each language.", "We provide corpus statistics for both languages in the top two rows of Tab.", "2.", "We created a hard-coded ψ(·, · | ·, ·, ·) to serve as a baseline for each language.", "For Spanish, we only activated, i.e.", "set to a number greater than zero, values that relate adjectives and determiners to nouns; for Hebrew, we only activated values that relate adjectives and verbs to nouns.", "We created two separate baselines because gender agreement operates differently in each language.", "To evaluate our approach, we held all morphosyntactic subtags fixed except for gender.", "For each annotated sentence, we intervened on the gender of the animate noun.", "We then used our model to infer which of the remaining tags should be updated (updating a tag means swapping the gender subtag because all morpho-syntactic subtags were held fixed except for gender) and reinflected the lemmata.", "Finally, we used the annotations to compute the taglevel F 1 score and the form-level accuracy, excluding the animate nouns on which we intervened.", "Results.", "We present the results in Tab.", "3.", "Recall is consistently significantly lower than precision.", "As expected, the baselines have the highest precision (though not by much).", "This is because they reflect well-known rules for each language.", "That said, they have lower recall than our approach because they fail to capture more subtle relationships.", "For both languages, our approach struggles with conjunctions.", "For example, consider the phraseél es un ingeniero y escritor (he is an engineer and a writer).", "Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora.", "This is 
because two nouns do not normally need to have the same gender when they are conjoined.", "Moreover, our MRF does not include co-reference information, so it cannot tell that, in this case, both nouns refer to the same person.", "Note Figure 4 : Gender stereotyping (left) and grammaticality (right) using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "that including co-reference information in our MRF would create cycles and inference would no longer be exact.", "Additionally, the lack of co-reference information means that, for Spanish, our approach fails to convert nouns that are noun-modifiers or indirect objects of verbs.", "Somewhat surprisingly, the neural parameterization does not outperform the linear parameterization.", "We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels; however, this parameter sharing does not seem to make a difference in practice, so the linear parameterization is sufficient.", "Extrinsic Evaluation We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping.", "Following Lu et al.", "(2018) , focus on neural language models.", "We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages.", "As our measure of gender stereotyping, we compare the log ratio of the prefix probabilities under a language model P lm for gendered, animate nouns, such as ingeniero, combined with four adjectives: good, bad, smart, and beautiful.", "The translations we use for these adjectives are given in App.", "B.", "We chose the first two adjectives because they should be used equally to describe men and women, and the latter two because we expect that they will reveal gender stereotypes.", "For example, consider log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS La ingeniera buena x) .", "If this log ratio is close to 0, then the language model is as likely to generate sentences that start with el ingeniero bueno (the good male engineer) as it is to generate sentences that start with la ingeniera bueno (the good female engineer).", "If the log ratio is negative, then the language model is more likely to generate the feminine form than the masculine form, while the opposite is true if the log ratio is positive.", "In practice, given the current gender disparity in engineering, we would expect the log ratio to be positive.", "If, however, the language model were trained on a corpus to which our CDA approach had been applied, we would then expect the log ratio to be much closer to zero.", "Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS El ingeniera bueno x) .", "We trained the linear parameterization using UD treebanks for Spanish, Hebrew, French, and Italian (see Tab.", "2).", "For each of the four languages, we parsed one million sentences from Wikipedia (May 2018 dump) using Dozat and Manning (2016) 's parser and extracted taggings and lemmata using the method of Müller et al.", "(2015) .", "We automatically extracted an animacy gazetteer from WordNet (Bond and Paik, 2012) and then manually filtered the output for correctness.", "We provide the 
size of the languages' animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in Tab.", "4.", "For each sentence containing a noun in our animacy gazetteer, we created a copy of the sentence, intervened on Figure 5 : Gender stereotyping for words that are stereotyped toward men or women in Spanish using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "the noun, and then used our approach to transform the sentence.", "For sentences containing more than one animate noun, we generated a separate sentence for each possible combination of genders.", "Choosing which sentences to duplicate is a difficult task.", "For example, alemán in Spanish can refer to either a German man or the German language; however, we have no way of distinguishing between these two meanings without additional annotations.", "Multilingual animacy detection (Jahan et al., 2018) might help with this challenge; co-reference information might additionally help.", "For each language, we trained the BPE-RNNLM baseline open-vocabulary language model of Mielke and Eisner (2018) using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "We then computed gender stereotyping and grammaticality as described above.", "We provide example phrases in Tab.", "5; we provide a more extensive list of phrases in App.", "C. Results Fig.", "4 demonstrates depicts gender stereotyping and grammaticality for each language using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "It is immediately apparent that our approch reduces gender stereotyping.", "On average, our approach reduces gender stereotyping by a factor of 2.5 (the lowest and highest factors are 1.2 (Ita) and 5.0 (Esp), respectively).", "We expected that naïve swapping of gendered words would also reduce gender stereotyping.", "Indeed, we see that this simple heuristic reduces gender stereotyping for some but not all of the languages.", "For Spanish, we also examine specific words that are stereotyped Table 5 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Phrases 1 and 2 are grammatical, while phrases 3 and 4 are not (dentoted by \"*\").", "Gender stereotyping is measured using phrases 1 and 2.", "Grammaticality is measured using phrases 1 and 3 and using phrases 2 and 4; these scores are then averaged.", "toward men or women.", "We define a word to be stereotyped toward one gender if 75% of its occurrences are of that gender.", "Fig.", "5 suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women.", "The grammaticality of the corpora following CDA differs between languages.", "That said, with the exception of Hebrew, our approach either sacrifices less grammaticality than naïve swapping of gendered words and sometimes increases grammaticality over the original corpus.", "Given that we know the model did not perform as accurately for Hebrew (see Tab.", "3), this finding is not surprising.", "Related Work In contrast to previous work, we focus on mitigating gender stereotypes in languages with rich morphology-specifically languages that exhibit gender 
agreement.", "To date, the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English.", "For example, Bolukbasi et al.", "(2016) proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings; Lu et al.", "(2018) studied gender stereotypes in language models; and Rudinger et al.", "(2018) introduced a novel Winograd schema for evaluating gender stereotypes in co-reference resolution.", "The most closely related work is that of Zhao et al.", "(2018) , who used CDA to reduce gender stereotypes in co-reference resolution; however, their approach yields ungrammatical sentences in morphologically rich languages.", "Our approach is specifically intended to yield grammatical sentences when applied to such languages.", "Habash et al.", "(2019) also focused on morphologically rich languages, specifically Arabic, but in the context of gender identification in machine translation.", "Conclusion We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages.", "To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns.", "To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results.", "For example, we demonstrated that our approach reduces gender stereotyping in neural language models.", "Finally, we also identified avenues for future work, such as the inclusion of co-reference information.", "A Belief Propagation Update Equations Our belief propagation update equations are µ i→f (m) = f ∈N (i)\\{f } µ f →i (m) (3) µ f i →i (m) = φ i (m) µ i→f i (m) (4) µ f ij →i (m) = m ∈M ψ(m , m | p i , p j , ) µ j→f ij (m ) (5) µ f ij →j (m) = m ∈M ψ(m, m | p i , p j , ) µ i→f ij (m ) (6) where N (i) returns the set of neighbouring nodes of node i.", "The belief at any node is given by β(v) = f ∈N (v) µ f →v (m).", "(7) B Adjective Translations Tab.", "6 and Tab.", "7 contain the feminine and masculine translations of the four adjectives that we used.", "C Extrinsic Evaluation Example Phrases For each noun in our animacy gazetteer, we generated sixteen phrases.", "Consider the noun engineer as an example.", "We created four phrases-one for each translation of The good engineer, The bad engineer, The smart engineer, and The beautiful engineer.", "These phrases, as well as their prefix log-likelihoods are provided below in Tab.", "8.", "Table 8 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Ungrammatical phrases are denoted by \"*\".", "Phrase" ] }
{ "paper_header_number": [ "1", "2", "4.", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Gender Stereotypes in Text", "Reinflect the lemmata to their new forms.", "A Markov Random Field for Morpho-Syntactic Agreement", "Parameterization", "Inference", "Parameter Estimation", "Intervention", "Experiments", "Intrinsic Evaluation", "Extrinsic Evaluation", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-98#paper-1253#slide-6
Reinflect tokens to obtain the CDA sentence
Get the new sentence by performing morphological reinflection where the tags change (this is a reasonably well-working procedure, established in three shared tasks at SIGMORPHON and CoNLL): Die gute Ärztin sitzt auf einem Stuhl ("the good doctor sits on a chair"), with tags F;SG;NOM on Die, gute, and Ärztin
Get the new sentence by performing morphological reinflection where the tags change (this is a reasonably well-working procedure, established in three shared tasks at SIGMORPHON and CoNLL): Die gute Ärztin sitzt auf einem Stuhl ("the good doctor sits on a chair"), with tags F;SG;NOM on Die, gute, and Ärztin
[]
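The slide above describes the final step: reinflect exactly those tokens whose tags changed. In the paper this is done with a trained inflection model; the sketch below replaces it with a tiny hand-written paradigm table, and all lemmata, tag strings, and forms are illustrative.

```python
# Hand-written stand-in for a learned inflector: (lemma, target tag) -> form.
PARADIGM = {
    ("el", "DET;FEM;SG"): "la",
    ("ingeniero", "NOUN;FEM;SG"): "ingeniera",
    ("experto", "ADJ;FEM;SG"): "experta",
}

def reinflect(lemma, old_tag, new_tag, old_form):
    """Reinflect a token only where its tag changed; otherwise keep the form."""
    if new_tag == old_tag:
        return old_form
    return PARADIGM.get((lemma, new_tag), old_form)

# (surface form, lemma, original tag) for "el ingeniero experto".
tokens = [("el", "el", "DET;MSC;SG"),
          ("ingeniero", "ingeniero", "NOUN;MSC;SG"),
          ("experto", "experto", "ADJ;MSC;SG")]
# Tags produced by the inference step after intervening on the noun's gender.
new_tags = ["DET;FEM;SG", "NOUN;FEM;SG", "ADJ;FEM;SG"]

print(" ".join(reinflect(lemma, tag, new, form)
               for (form, lemma, tag), new in zip(tokens, new_tags)))
# -> "la ingeniera experta"
```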
GEM-SciDuet-train-98#paper-1253#slide-7
1253
Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology
Gender stereotypes are manifest in most of the world's languages and are consequently propagated or amplified by NLP systems. Although research has focused on mitigating gender stereotypes in English, the approaches that are commonly employed produce ungrammatical sentences in morphologically rich languages. We present a novel approach for converting between masculine-inflected and feminine-inflected sentences in such languages. For Spanish and Hebrew, our approach achieves F1 scores of 82% and 73% at the level of tags and accuracies of 90% and 87% at the level of forms. By evaluating our approach using four different languages, we show that, on average, it reduces gender stereotyping by a factor of 2.5 without any sacrifice to grammaticality. Sebastian J. Mielke and Jason Eisner. 2018. Spell once, summon anywhere: A two-level open-vocabulary language model. CoRR, abs/1804.08205. Thomas Müller, Ryan Cotterell, Alexander Fraser, and Hinrich Schütze. 2015. Joint lemmatization and morphological tagging with lemming. In Proceed-
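Read end to end, the abstract above corresponds to a four-step pipeline (analyze, intervene, infer tags, reinflect). The outline below is only a sketch of that control flow; each callable is a placeholder for a component the paper describes (parser and tagger, gender intervention, MRF inference, inflector), not released code.

```python
def cda_transform(sentence, analyze, intervene, infer_tags, reinflect_all):
    """Four-step counterfactual transformation of one sentence:
    1. analyze: parse, tag, and lemmatize the sentence,
    2. intervene: flip the gender subtag of a chosen animate noun,
    3. infer_tags: let the agreement model update the remaining tags,
    4. reinflect_all: reinflect the lemmata whose tags changed."""
    tree, tags, lemmata = analyze(sentence)
    noun_index, new_tag = intervene(tree, tags)
    new_tags = infer_tags(tree, tags, noun_index, new_tag)
    return reinflect_all(lemmata, tags, new_tags)
```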
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases.", "This is because NLP systems depend on language corpora, which are inherently \"not objective; they are creations of human design\" (Crawford, 2013) .", "One type of societal bias that has received considerable attention from the NLP community is gender stereotyping (Garg et al., 2017; Rudinger et al., 2017; Sutton et al., 2018) .", "Gender stereotypes can manifest in language in overt ways.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "Consequently, any NLP system that is trained such a corpus will likely learn to associate engineer with men, but not with women (De-Arteaga et al., 2019) .", "To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English (Bolukbasi et al., 2016; Dixon et al., 2018; Zhao et al., 2017 ).", "Yet, gender stereotypes also exist in other languages .", "We extract the properties of each word in the sentence.", "We then fix a noun and its tags and infer the manner in which the remaining tags must be updated.", "Finally, we reinflect the lemmata to their new forms.", "because they are a function of society, not of grammar.", "Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement (Corbett, 1991) .", "In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "This means that if the gender of one word changes, the others have to be updated to match.", "As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped (Zhao et al., 2018) , will yield ungrammatical sentences.", "Consider the Spanish phrase el ingeniero experto (the skilled engineer).", "Replacing ingeniero with ingeniera is insufficient-el must also be replaced with la and experto with experta.", "In this paper, we present a new approach to counterfactual data augmentation (CDA; Lu et al., 2018) for mitigating gender stereotypes associated with animate 1 nouns (i.e., nouns that 
represent people) for morphologically rich languages.", "We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns.", "We use this model as part of a four-step process, depicted in Fig.", "1 , to reinflect entire sentences following an intervention on the grammatical gender of one word.", "We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level F 1 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively.", "We also conduct an extrinsic evaluation using four languages.", "Following Lu et al.", "(2018) , we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality.", "Gender Stereotypes in Text Men and women are mentioned at different rates in text (Coates, 1987) .", "This problem is exacerbated in certain contexts.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resumé filtering system.", "Gender stereotypes of this sort have been observed in word embeddings (Bolukbasi et al., 2016; Sutton et al., 2018) , contextual word embeddings (Zhao et al., 2019) , and co-reference resolution systems (Rudinger et al., 2018; Zhao et al., 2018) inter alia.", "A quick fix: swapping gendered words.", "One approach to mitigating such gender stereotypes is counterfactual data augmentation (CDA; Lu et al., 2018) .", "In English, this involves augmenting a corpus with additional sentences in which gendered words, such as he and she, have been swapped to yield a balanced representation.", "Indeed, Zhao et al.", "(2018) showed that this simple heuristic significantly reduces gender stereotyping in neural co-reference resolution systems, without harming system performance.", "Unfortunately, this approach is only applicable to English and other languages with little morphological inflection.", "When applied to morphologically rich languages that exhibit gender agreement, it yields ungrammatical sentences.", "The problem: inflected languages.", "Many languages, including Spanish and Hebrew, have gender inflections for nouns, verbs, and adjectivesi.e., the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "2 This means that if the gender of one word changes, the others have to be updated to preserve morpho-syntactic agreement (Corbett, 2012) .", "Consider the following example from Spanish, where we wish to transform Sentence (1) to Sentence (2).", "(Parts of words that mark gender are depicted in bold.)", "This task is not as simple as replacing el with la-ingeniero and experto must also be reinflected.", "Moreover, the changes required for one language are not the same as those required for another (e.g., verbs are marked with gender in Hebrew, but not in Spanish).", "( Our approach.", "Our goal is to transform sentences like Sentence (1) to Sentence (2) and vice versa.", "To the best of our knowledge, this task has not been studied previously.", "Indeed, there is no existing annotated corpus of paired sentences that could be used to train a supervised model.", "As a result, we 
take an unsupervised 3 approach using dependency trees, lemmata, part-of-speech (POS) tags, and morpho-syntactic tags from Universal Dependencies corpora (UD; Nivre et al., 2018) .", "Specifically, we propose the following four-step process: 1.", "Analyze the sentence (including parsing, morphological analysis, and lemmatization).", "Figure 2 : Dependency tree for the sentence El ingeniero alemán es muy experto.", "2.", "Intervene on a gendered word.", "3.", "Infer the new morpho-syntactic tags.", "Reinflect the lemmata to their new forms.", "This process is depicted in Fig.", "1 .", "The primary technical contribution is a novel Markov random field for performing step 3, described in the next section.", "A Markov Random Field for Morpho-Syntactic Agreement In this section, we present a Markov random field (MRF; Koller and Friedman, 2009 ) for morphosyntactic agreement.", "This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags.", "Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement.", "A dependency tree for a sentence (see Fig.", "2 for an example) is a set of ordered triples (i, j, ), where i and j are positions in the sentence (or a distinguished root symbol) and ∈ L is the label of the edge i → j in the tree; each position occurs exactly once as the first element in a triple.", "Each dependency tree T is associated with a sequence of morpho-syntactic tags m = m 1 , .", ".", ".", ", m |T | and a sequence of part-ofspeech (POS) tags p = p 1 , .", ".", ".", ", p |T | .", "For example, the tags m ∈ M and p ∈ P for ingeniero are [MSC; SG] and NOUN, respectively, because ingeniero is a masculine, singular noun.", "For notational simplicity, we define M = M |T | to be the set of all length-|T | sequences of morpho-syntactic tags.", "We define the probability of m given T and p as Pr(m | T, p) ∝ (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ), (1) where the binary factor ψ(·, · | ·, ·, ·) ≥ 0 scores how well the morpho-syntactic tags m i and m j agree given the POS tags p i and p j and the label .", "For example, consider the amod (adjectival modifier) edge from experto to ingeniero in Fig.", "2 .", "The factor ψ(m i , m j | A, N, amod) returns a high score if the corresponding morpho-syntactic tags agree in gender and number (e.g., m i = [MSC; SG] and m j = [MSC; SG]) and a low score if they do not (e.g., m i = [MSC; SG] and m j = [FEM; PL]).", "The unary factor φ i (·) ≥ 0 scores a morpho-syntactic tag m i outside the context of the dependency tree.", "As we explain in §3.1, we use these unary factors to force or disallow particular tags when performing an intervention; we do not learn them.", "Eq.", "(1) is normalized by the following partition function: Z(T, p) = m ∈M (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ).", "Z(T, p) can be calculated using belief propagation; we provide the update equations that we use in App.", "A.", "Our model is depicted in Fig.", "3 .", "It is noteworthy that this model is delexicalized-i.e., it considers only the labeled dependency tree and the POS tags, not the actual words themselves.", "Parameterization We consider a linear parameterization and a neural parameterization of the binary factor ψ(·, · | ·, ·, ·).", "Linear parameterization.", "We define a matrix W (p i , p j , ) ∈ R c×c for each triple (p i , p j , ), where c is the number 
of morpho-syntactic subtags.", "For example, [MSC; SG] has two subtags MSC and SG.", "We then define ψ(·, · | ·, ·, ·) as follows: ψ(m i , m j | p i , p j , ) = exp (m i W (p i , p j , )m j ), where m i ∈ {0, 1} c is a multi-hot encoding of m i .", "Neural parameterization.", "As an alternative, we also define a neural parameterization of W (p i , p j , ) to allow parameter sharing among El ingeniero alemán es muy experto edges with different parts of speech and labels: φ1(·) φ2(·) φ3(·) φ4(·) φ5(·) φ6(·) ψ(·, · | D, N, det) ψ(·, · | A, N, amod) ψ(·, · | N, V, cop) ψ(·, · | AV, A, advmod) ψ(·, · | A, N, amod) W (p i , p j , ) = exp (U tanh(V [e(p i ); e(p j ); e( )])) where U ∈ R c×c×n 1 , V ∈ R n 1 ×3n 2 , and n 1 and n 2 define the structure of the neural parameterization and each e(·) ∈ R n 2 is an embedding function.", "Parameterization of φ i .", "We use the unary factors only to force or disallow particular tags when performing an intervention.", "Specifically, we define φ i (m) = α if m = m i 1 otherwise, (2) where α > 1 is a strength parameter that determines the extent to which m i should remain unchanged following an intervention.", "In the limit as α → ∞, all tags will remain unchanged except for the tag directly involved in the intervention.", "4 Inference Because our MRF is acyclic and tree-shaped, we can use belief propagation (Pearl, 1988) to perform exact inference.", "The algorithm is a generalization of the forward-backward algorithm for hidden Markov models (Rabiner and Juang, 1986 Parameter Estimation We use gradient-based optimization.", "We treat the negative log-likelihood − log (Pr(m | T, p)) as the loss function for tree T and compute its gradient using automatic differentiation (Rall, 1981) .", "We learn the parameters of §3.1 by optimizing the negative log-likelihood using gradient descent.", "Intervention As explained in §2, our goal is to transform sentences like Sentence (1) to Sentence (2) by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morphosyntactic agreement.", "For example, if we change the morpho-syntactic tag for ingeniero from [MSC;SG] to [FEM;SG], then we must also update the tags for el and experto, but do not need to update the tag for es, which should remain unchanged as [IN; PR; SG].", "If we intervene on the i th word in a sentence, changing its tag from m i to m i , then using our model to infer the manner in which the remaining tags must be updated means using Pr(m −i | m i , T, p) to identify high-probability tags for the remaining words.", "Crucially, we wish to change as little as possible when intervening on a gendered word.", "The unary factors φ i enable us to do exactly this.", "As described in the previous section, the strength parameter α determines the extent to which m i should remain unchanged following an intervention-the larger the value, the less likely it is that m i will be changed.", "Once the new tags have been inferred, the final step is to reinflect the lemmata to their new forms.", "This task has received considerable attention from the NLP community (Cotterell et al., 2016 (Cotterell et al., , 2017 .", "We use the inflection model of .", "This model conditions on the lemma x and morphosyntactic tag m to form a distribution over possible inflections.", "For example, given experto and [A; FEM; PL], the trained inflection model will assign a high probability to expertas.", "We provide accuracies for the trained inflection model in Tab.", 
"1.", "Experiments We used the Adam optimizer (Kingma and Ba, 2014) to train both parameterizations of our model until the change in dev-loss was less than 10 −5 bits.", "We set β = (0.9, 0.999) without tuning, and chose a learning rate of 0.005 and weight decay factor of 0.0001 after tuning.", "We tuned log α in the set {0.5, 0.75, 1, 2, 5, 10} and chose log α = 1.", "For the neural parameterization, we set n 1 = 9 and n 2 = 3 without any tuning.", "Finally, we trained the inflection model using only gendered words.", "We evaluate our approach both intrinsically and extrinsically.", "For the intrinsic evaluation, we focus on whether our approach yields the correct morphosyntactic tags and the correct reinflections.", "For the extrinsic evaluation, we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models.", "Intrinsic Evaluation To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "We therefore annotated Spanish and Hebrew sentences ourselves, with annotations made by native speakers of each language.", "Specifically, for each language, we extracted sentences containing animate nouns from that language's UD treebank.", "The average length of these extracted sentences was 37 words.", "We then manually inspected each sentence, intervening on the gender of the animate noun and reinflecting the sentence accordingly.", "We chose Spanish and Hebrew because gender agreement operates differ- Table 3 : Tag-level precision, recall, F 1 score, and accuracy and form-level accuracy for the baselines (\"-BASE\") and for our approach (\"-LIN\" is the linear parameterization, \"-NN\" is the neural parameterization).", "ently in each language.", "We provide corpus statistics for both languages in the top two rows of Tab.", "2.", "We created a hard-coded ψ(·, · | ·, ·, ·) to serve as a baseline for each language.", "For Spanish, we only activated, i.e.", "set to a number greater than zero, values that relate adjectives and determiners to nouns; for Hebrew, we only activated values that relate adjectives and verbs to nouns.", "We created two separate baselines because gender agreement operates differently in each language.", "To evaluate our approach, we held all morphosyntactic subtags fixed except for gender.", "For each annotated sentence, we intervened on the gender of the animate noun.", "We then used our model to infer which of the remaining tags should be updated (updating a tag means swapping the gender subtag because all morpho-syntactic subtags were held fixed except for gender) and reinflected the lemmata.", "Finally, we used the annotations to compute the taglevel F 1 score and the form-level accuracy, excluding the animate nouns on which we intervened.", "Results.", "We present the results in Tab.", "3.", "Recall is consistently significantly lower than precision.", "As expected, the baselines have the highest precision (though not by much).", "This is because they reflect well-known rules for each language.", "That said, they have lower recall than our approach because they fail to capture more subtle relationships.", "For both languages, our approach struggles with conjunctions.", "For example, consider the phraseél es un ingeniero y escritor (he is an engineer and a writer).", "Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora.", "This is 
because two nouns do not normally need to have the same gender when they are conjoined.", "Moreover, our MRF does not include co-reference information, so it cannot tell that, in this case, both nouns refer to the same person.", "Note Figure 4 : Gender stereotyping (left) and grammaticality (right) using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "that including co-reference information in our MRF would create cycles and inference would no longer be exact.", "Additionally, the lack of co-reference information means that, for Spanish, our approach fails to convert nouns that are noun-modifiers or indirect objects of verbs.", "Somewhat surprisingly, the neural parameterization does not outperform the linear parameterization.", "We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels; however, this parameter sharing does not seem to make a difference in practice, so the linear parameterization is sufficient.", "Extrinsic Evaluation We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping.", "Following Lu et al.", "(2018) , focus on neural language models.", "We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages.", "As our measure of gender stereotyping, we compare the log ratio of the prefix probabilities under a language model P lm for gendered, animate nouns, such as ingeniero, combined with four adjectives: good, bad, smart, and beautiful.", "The translations we use for these adjectives are given in App.", "B.", "We chose the first two adjectives because they should be used equally to describe men and women, and the latter two because we expect that they will reveal gender stereotypes.", "For example, consider log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS La ingeniera buena x) .", "If this log ratio is close to 0, then the language model is as likely to generate sentences that start with el ingeniero bueno (the good male engineer) as it is to generate sentences that start with la ingeniera bueno (the good female engineer).", "If the log ratio is negative, then the language model is more likely to generate the feminine form than the masculine form, while the opposite is true if the log ratio is positive.", "In practice, given the current gender disparity in engineering, we would expect the log ratio to be positive.", "If, however, the language model were trained on a corpus to which our CDA approach had been applied, we would then expect the log ratio to be much closer to zero.", "Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS El ingeniera bueno x) .", "We trained the linear parameterization using UD treebanks for Spanish, Hebrew, French, and Italian (see Tab.", "2).", "For each of the four languages, we parsed one million sentences from Wikipedia (May 2018 dump) using Dozat and Manning (2016) 's parser and extracted taggings and lemmata using the method of Müller et al.", "(2015) .", "We automatically extracted an animacy gazetteer from WordNet (Bond and Paik, 2012) and then manually filtered the output for correctness.", "We provide the 
size of the languages' animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in Tab.", "4.", "For each sentence containing a noun in our animacy gazetteer, we created a copy of the sentence, intervened on the noun, and then used our approach to transform the sentence.", "Figure 5 : Gender stereotyping for words that are stereotyped toward men or women in Spanish using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "For sentences containing more than one animate noun, we generated a separate sentence for each possible combination of genders.", "Choosing which sentences to duplicate is a difficult task.", "For example, alemán in Spanish can refer to either a German man or the German language; however, we have no way of distinguishing between these two meanings without additional annotations.", "Multilingual animacy detection (Jahan et al., 2018) might help with this challenge; co-reference information might additionally help.", "For each language, we trained the BPE-RNNLM baseline open-vocabulary language model of Mielke and Eisner (2018) using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "We then computed gender stereotyping and grammaticality as described above.", "We provide example phrases in Tab.", "5; we provide a more extensive list of phrases in App.", "C. Results Fig.", "4 depicts gender stereotyping and grammaticality for each language using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "It is immediately apparent that our approach reduces gender stereotyping.", "On average, our approach reduces gender stereotyping by a factor of 2.5 (the lowest and highest factors are 1.2 (Ita) and 5.0 (Esp), respectively).", "We expected that naïve swapping of gendered words would also reduce gender stereotyping.", "Indeed, we see that this simple heuristic reduces gender stereotyping for some but not all of the languages.", "For Spanish, we also examine specific words that are stereotyped toward men or women.", "We define a word to be stereotyped toward one gender if 75% of its occurrences are of that gender.", "Table 5 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Phrases 1 and 2 are grammatical, while phrases 3 and 4 are not (denoted by \"*\").", "Gender stereotyping is measured using phrases 1 and 2.", "Grammaticality is measured using phrases 1 and 3 and using phrases 2 and 4; these scores are then averaged.", "Fig.", "5 suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women.", "The grammaticality of the corpora following CDA differs between languages.", "That said, with the exception of Hebrew, our approach sacrifices less grammaticality than naïve swapping of gendered words and sometimes even increases grammaticality over the original corpus.", "Given that we know the model did not perform as accurately for Hebrew (see Tab.", "3), this finding is not surprising.", "Related Work In contrast to previous work, we focus on mitigating gender stereotypes in languages with rich morphology-specifically languages that exhibit gender
agreement.", "To date, the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English.", "For example, Bolukbasi et al.", "(2016) proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings; Lu et al.", "(2018) studied gender stereotypes in language models; and Rudinger et al.", "(2018) introduced a novel Winograd schema for evaluating gender stereotypes in co-reference resolution.", "The most closely related work is that of Zhao et al.", "(2018) , who used CDA to reduce gender stereotypes in co-reference resolution; however, their approach yields ungrammatical sentences in morphologically rich languages.", "Our approach is specifically intended to yield grammatical sentences when applied to such languages.", "Habash et al.", "(2019) also focused on morphologically rich languages, specifically Arabic, but in the context of gender identification in machine translation.", "Conclusion We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages.", "To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns.", "To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results.", "For example, we demonstrated that our approach reduces gender stereotyping in neural language models.", "Finally, we also identified avenues for future work, such as the inclusion of co-reference information.", "A Belief Propagation Update Equations Our belief propagation update equations are µ_{i→f}(m) = ∏_{f′ ∈ N(i) ∖ {f}} µ_{f′→i}(m) (3), µ_{f_i→i}(m) = φ_i(m) µ_{i→f_i}(m) (4), µ_{f_{ij}→i}(m) = ∑_{m′ ∈ M} ψ(m′, m | p_i, p_j, ℓ) µ_{j→f_{ij}}(m′) (5), and µ_{f_{ij}→j}(m) = ∑_{m′ ∈ M} ψ(m, m′ | p_i, p_j, ℓ) µ_{i→f_{ij}}(m′) (6), where N(i) returns the set of neighbouring nodes of node i.", "The belief at any node is given by β(v) = ∏_{f ∈ N(v)} µ_{f→v}(m).", "(7) B Adjective Translations Tab.", "6 and Tab.", "7 contain the feminine and masculine translations of the four adjectives that we used.", "C Extrinsic Evaluation Example Phrases For each noun in our animacy gazetteer, we generated sixteen phrases.", "Consider the noun engineer as an example.", "We created four phrases-one for each translation of The good engineer, The bad engineer, The smart engineer, and The beautiful engineer.", "These phrases, as well as their prefix log-likelihoods are provided below in Tab.", "8.", "Table 8 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Ungrammatical phrases are denoted by \"*\".", "Phrase" ] }
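To make the belief propagation updates in equations (3)-(7) above concrete, here is a minimal sketch of sum-product message passing on a tree-shaped MRF with unary and pairwise potentials. It is not the authors' code: messages are passed directly between variable nodes (equivalent to the factor-message formulation when all non-unary factors are pairwise), and the toy tag set and potentials in the demo at the bottom are hypothetical.

```python
import numpy as np

def belief_propagation(n_nodes, edges, psi, phi, root=0):
    # Sum-product BP on a tree-shaped pairwise MRF (cf. eqs. (3)-(7)).
    # n_nodes: number of variable nodes, each with K possible tags
    # edges:   list of undirected tree edges (i, j)
    # psi:     dict (i, j) -> K x K table; psi[(i, j)][a, b] scores tag a
    #          at node i together with tag b at node j
    # phi:     (n_nodes, K) array of unary potentials
    # returns: (n_nodes, K) array of normalized marginal beliefs
    K = phi.shape[1]
    nbrs = {v: [] for v in range(n_nodes)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)

    def pair(i, j):
        # Potential table oriented as (tag at i, tag at j).
        return psi[(i, j)] if (i, j) in psi else psi[(j, i)].T

    # Root the tree; `order` lists parents before their children.
    order, parent, visited, stack = [], {root: None}, {root}, [root]
    while stack:
        v = stack.pop()
        order.append(v)
        for u in nbrs[v]:
            if u not in visited:
                visited.add(u)
                parent[u] = v
                stack.append(u)

    msgs = {}  # msgs[(i, j)]: length-K message from node i to node j

    # Upward pass: leaves send messages toward the root.
    for v in reversed(order):
        p = parent[v]
        if p is None:
            continue
        incoming = phi[v].copy()
        for u in nbrs[v]:
            if u != p:
                incoming *= msgs[(u, v)]
        msgs[(v, p)] = pair(v, p).T @ incoming  # marginalize out v's tag

    # Downward pass: the root sends messages back toward the leaves.
    for v in order:
        for u in nbrs[v]:
            if u == parent[v]:
                continue
            incoming = phi[v].copy()
            for w in nbrs[v]:
                if w != u:
                    incoming *= msgs[(w, v)]
            msgs[(v, u)] = pair(v, u).T @ incoming

    # Belief at each node: unary potential times all incoming messages (eq. (7)).
    beliefs = phi.copy()
    for v in range(n_nodes):
        for u in nbrs[v]:
            beliefs[v] *= msgs[(u, v)]
    return beliefs / beliefs.sum(axis=1, keepdims=True)

# Toy chain DET -- NOUN -- ADJ over two tags [MSC, FEM]; agreement is rewarded
# and the noun's unary potential is biased toward MSC.
agree = np.array([[2.0, 0.1], [0.1, 2.0]])
phi = np.array([[1.0, 1.0], [5.0, 1.0], [1.0, 1.0]])
print(belief_propagation(3, [(0, 1), (1, 2)], {(0, 1): agree, (1, 2): agree}, phi))
```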
{ "paper_header_number": [ "1", "2", "4.", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Gender Stereotypes in Text", "Reinflect the lemmata to their new forms.", "A Markov Random Field for Morpho-Syntactic Agreement", "Parameterization", "Inference", "Parameter Estimation", "Intervention", "Experiments", "Intrinsic Evaluation", "Extrinsic Evaluation", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-98#paper-1253#slide-7
Intrinsic evaluation: how good are we at gender swapping? (Hebrew, Spanish)
We manually annotated over 100 sentences for each language and checked performance: precision (P), recall (R), F1, tag-level accuracy (Acc), and form-level accuracy (Acc)
We manually annotated over 100 sentences for each language and checked performance: precision (P), recall (R), F1, tag-level accuracy (Acc), and form-level accuracy (Acc)
[]
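For concreteness, the tag-level precision, recall, and F1 and the two accuracies on the slide above can be computed roughly as follows. This is an illustrative sketch, not the evaluation script used for the paper; gold_flips and pred_flips (sets of token indices whose gender subtag should be, respectively was, updated after the intervention) and the form sequences are assumed inputs.

```python
def tag_prf(gold_flips, pred_flips):
    # Micro-averaged tag-level precision, recall, and F1 over sentences.
    # gold_flips, pred_flips: one set of token indices per sentence.
    tp = fp = fn = 0
    for gold, pred in zip(gold_flips, pred_flips):
        tp += len(gold & pred)
        fp += len(pred - gold)
        fn += len(gold - pred)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def form_accuracy(gold_forms, pred_forms):
    # Fraction of word forms that match the annotated reinflection.
    total = correct = 0
    for gold, pred in zip(gold_forms, pred_forms):
        for g, q in zip(gold, pred):
            total += 1
            correct += int(g == q)
    return correct / total if total else 0.0

# Hypothetical two-sentence example.
print(tag_prf([{1, 3}, {0}], [{1}, {0, 2}]))                          # (0.667, 0.667, 0.667)
print(form_accuracy([["la", "ingeniera"]], [["la", "ingeniero"]]))    # 0.5
```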
GEM-SciDuet-train-98#paper-1253#slide-8
1253
Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology
Gender stereotypes are manifest in most of the world's languages and are consequently propagated or amplified by NLP systems. Although research has focused on mitigating gender stereotypes in English, the approaches that are commonly employed produce ungrammatical sentences in morphologically rich languages. We present a novel approach for converting between masculine-inflected and feminine-inflected sentences in such languages. For Spanish and Hebrew, our approach achieves F1 scores of 82% and 73% at the level of tags and accuracies of 90% and 87% at the level of forms. By evaluating our approach using four different languages, we show that, on average, it reduces gender stereotyping by a factor of 2.5 without any sacrifice to grammaticality.
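The "commonly employed" approach that the abstract contrasts with is the English-style swap heuristic for counterfactual data augmentation. A minimal sketch is below; the swap list is a small, hypothetical fragment of the word lists used in prior work, and the heuristic deliberately glosses over ambiguities (e.g. her can correspond to either him or his), which is part of why it produces ungrammatical sentences in morphologically rich languages.

```python
# Naive CDA ("Swap" baseline): augment a corpus with copies in which
# gendered English words are exchanged. The swap list is a toy fragment.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "his": "her", "himself": "herself", "herself": "himself",
         "man": "woman", "woman": "man"}

def swap_gendered_words(tokens):
    return [SWAPS.get(t.lower(), t) for t in tokens]

def augment(corpus):
    # corpus: list of token lists; returns originals plus swapped copies.
    return corpus + [swap_gendered_words(sent) for sent in corpus]

print(augment([["he", "is", "an", "engineer"]]))
# [['he', 'is', 'an', 'engineer'], ['she', 'is', 'an', 'engineer']]
```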
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases.", "This is because NLP systems depend on language corpora, which are inherently \"not objective; they are creations of human design\" (Crawford, 2013) .", "One type of societal bias that has received considerable attention from the NLP community is gender stereotyping (Garg et al., 2017; Rudinger et al., 2017; Sutton et al., 2018) .", "Gender stereotypes can manifest in language in overt ways.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "Consequently, any NLP system that is trained such a corpus will likely learn to associate engineer with men, but not with women (De-Arteaga et al., 2019) .", "To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English (Bolukbasi et al., 2016; Dixon et al., 2018; Zhao et al., 2017 ).", "Yet, gender stereotypes also exist in other languages .", "We extract the properties of each word in the sentence.", "We then fix a noun and its tags and infer the manner in which the remaining tags must be updated.", "Finally, we reinflect the lemmata to their new forms.", "because they are a function of society, not of grammar.", "Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement (Corbett, 1991) .", "In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "This means that if the gender of one word changes, the others have to be updated to match.", "As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped (Zhao et al., 2018) , will yield ungrammatical sentences.", "Consider the Spanish phrase el ingeniero experto (the skilled engineer).", "Replacing ingeniero with ingeniera is insufficient-el must also be replaced with la and experto with experta.", "In this paper, we present a new approach to counterfactual data augmentation (CDA; Lu et al., 2018) for mitigating gender stereotypes associated with animate 1 nouns (i.e., nouns that 
represent people) for morphologically rich languages.", "We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns.", "We use this model as part of a four-step process, depicted in Fig.", "1 , to reinflect entire sentences following an intervention on the grammatical gender of one word.", "We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level F 1 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively.", "We also conduct an extrinsic evaluation using four languages.", "Following Lu et al.", "(2018) , we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality.", "Gender Stereotypes in Text Men and women are mentioned at different rates in text (Coates, 1987) .", "This problem is exacerbated in certain contexts.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resumé filtering system.", "Gender stereotypes of this sort have been observed in word embeddings (Bolukbasi et al., 2016; Sutton et al., 2018) , contextual word embeddings (Zhao et al., 2019) , and co-reference resolution systems (Rudinger et al., 2018; Zhao et al., 2018) inter alia.", "A quick fix: swapping gendered words.", "One approach to mitigating such gender stereotypes is counterfactual data augmentation (CDA; Lu et al., 2018) .", "In English, this involves augmenting a corpus with additional sentences in which gendered words, such as he and she, have been swapped to yield a balanced representation.", "Indeed, Zhao et al.", "(2018) showed that this simple heuristic significantly reduces gender stereotyping in neural co-reference resolution systems, without harming system performance.", "Unfortunately, this approach is only applicable to English and other languages with little morphological inflection.", "When applied to morphologically rich languages that exhibit gender agreement, it yields ungrammatical sentences.", "The problem: inflected languages.", "Many languages, including Spanish and Hebrew, have gender inflections for nouns, verbs, and adjectivesi.e., the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "2 This means that if the gender of one word changes, the others have to be updated to preserve morpho-syntactic agreement (Corbett, 2012) .", "Consider the following example from Spanish, where we wish to transform Sentence (1) to Sentence (2).", "(Parts of words that mark gender are depicted in bold.)", "This task is not as simple as replacing el with la-ingeniero and experto must also be reinflected.", "Moreover, the changes required for one language are not the same as those required for another (e.g., verbs are marked with gender in Hebrew, but not in Spanish).", "( Our approach.", "Our goal is to transform sentences like Sentence (1) to Sentence (2) and vice versa.", "To the best of our knowledge, this task has not been studied previously.", "Indeed, there is no existing annotated corpus of paired sentences that could be used to train a supervised model.", "As a result, we 
take an unsupervised 3 approach using dependency trees, lemmata, part-of-speech (POS) tags, and morpho-syntactic tags from Universal Dependencies corpora (UD; Nivre et al., 2018).", "Specifically, we propose the following four-step process: 1.", "Analyze the sentence (including parsing, morphological analysis, and lemmatization).", "Figure 2 : Dependency tree for the sentence El ingeniero alemán es muy experto.", "2.", "Intervene on a gendered word.", "3.", "Infer the new morpho-syntactic tags.", "4.", "Reinflect the lemmata to their new forms.", "This process is depicted in Fig.", "1 .", "The primary technical contribution is a novel Markov random field for performing step 3, described in the next section.", "A Markov Random Field for Morpho-Syntactic Agreement In this section, we present a Markov random field (MRF; Koller and Friedman, 2009 ) for morphosyntactic agreement.", "This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags.", "Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement.", "A dependency tree for a sentence (see Fig.", "2 for an example) is a set of ordered triples (i, j, ℓ), where i and j are positions in the sentence (or a distinguished root symbol) and ℓ ∈ L is the label of the edge i → j in the tree; each position occurs exactly once as the first element in a triple.", "Each dependency tree T is associated with a sequence of morpho-syntactic tags m = m_1, ..., m_|T| and a sequence of part-of-speech (POS) tags p = p_1, ..., p_|T|.", "For example, the tags m ∈ M and p ∈ P for ingeniero are [MSC; SG] and NOUN, respectively, because ingeniero is a masculine, singular noun.", "For notational simplicity, we define M = M^{|T|} to be the set of all length-|T| sequences of morpho-syntactic tags.", "We define the probability of m given T and p as Pr(m | T, p) ∝ ∏_{(i,j,ℓ)∈T} φ_i(m_i) · ψ(m_i, m_j | p_i, p_j, ℓ), (1) where the binary factor ψ(·, · | ·, ·, ·) ≥ 0 scores how well the morpho-syntactic tags m_i and m_j agree given the POS tags p_i and p_j and the label ℓ.", "For example, consider the amod (adjectival modifier) edge from experto to ingeniero in Fig.", "2 .", "The factor ψ(m i , m j | A, N, amod) returns a high score if the corresponding morpho-syntactic tags agree in gender and number (e.g., m i = [MSC; SG] and m j = [MSC; SG]) and a low score if they do not (e.g., m i = [MSC; SG] and m j = [FEM; PL]).", "The unary factor φ i (·) ≥ 0 scores a morpho-syntactic tag m i outside the context of the dependency tree.", "As we explain in §3.1, we use these unary factors to force or disallow particular tags when performing an intervention; we do not learn them.", "Eq.", "(1) is normalized by the following partition function: Z(T, p) = ∑_{m′∈M} ∏_{(i,j,ℓ)∈T} φ_i(m′_i) · ψ(m′_i, m′_j | p_i, p_j, ℓ).", "Z(T, p) can be calculated using belief propagation; we provide the update equations that we use in App.", "A.", "Our model is depicted in Fig.", "3 .", "It is noteworthy that this model is delexicalized-i.e., it considers only the labeled dependency tree and the POS tags, not the actual words themselves.", "Parameterization We consider a linear parameterization and a neural parameterization of the binary factor ψ(·, · | ·, ·, ·).", "Linear parameterization.", "We define a matrix W(p_i, p_j, ℓ) ∈ R^{c×c} for each triple (p_i, p_j, ℓ), where c is the number
of morpho-syntactic subtags.", "For example, [MSC; SG] has two subtags MSC and SG.", "We then define ψ(·, · | ·, ·, ·) as follows: ψ(m_i, m_j | p_i, p_j, ℓ) = exp(m_i^T W(p_i, p_j, ℓ) m_j), where m_i ∈ {0, 1}^c is a multi-hot encoding of m_i.", "Neural parameterization.", "As an alternative, we also define a neural parameterization of W(p_i, p_j, ℓ) to allow parameter sharing among edges with different parts of speech and labels: W(p_i, p_j, ℓ) = exp(U tanh(V [e(p_i); e(p_j); e(ℓ)])), where U ∈ R^{c×c×n_1}, V ∈ R^{n_1×3n_2}, n_1 and n_2 define the structure of the neural parameterization, and each e(·) ∈ R^{n_2} is an embedding function.", "Figure 3 : Factor graph for the sentence El ingeniero alemán es muy experto, with unary factors φ1(·), ..., φ6(·) and binary factors ψ(·, · | D, N, det), ψ(·, · | A, N, amod), ψ(·, · | N, V, cop), ψ(·, · | AV, A, advmod), and ψ(·, · | A, N, amod).", "Parameterization of φ i .", "We use the unary factors only to force or disallow particular tags when performing an intervention.", "Specifically, we define φ_i(m) = α if m = m_i, 1 otherwise, (2) where α > 1 is a strength parameter that determines the extent to which m_i should remain unchanged following an intervention.", "In the limit as α → ∞, all tags will remain unchanged except for the tag directly involved in the intervention.", "4 Inference Because our MRF is acyclic and tree-shaped, we can use belief propagation (Pearl, 1988) to perform exact inference.", "The algorithm is a generalization of the forward-backward algorithm for hidden Markov models (Rabiner and Juang, 1986).", "Parameter Estimation We use gradient-based optimization.", "We treat the negative log-likelihood − log (Pr(m | T, p)) as the loss function for tree T and compute its gradient using automatic differentiation (Rall, 1981) .", "We learn the parameters of §3.1 by optimizing the negative log-likelihood using gradient descent.", "Intervention As explained in §2, our goal is to transform sentences like Sentence (1) to Sentence (2) by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morphosyntactic agreement.", "For example, if we change the morpho-syntactic tag for ingeniero from [MSC;SG] to [FEM;SG], then we must also update the tags for el and experto, but do not need to update the tag for es, which should remain unchanged as [IN; PR; SG].", "If we intervene on the i th word in a sentence, changing its tag from m_i to a new tag m_i′, then using our model to infer the manner in which the remaining tags must be updated means using Pr(m_−i | m_i′, T, p) to identify high-probability tags for the remaining words.", "Crucially, we wish to change as little as possible when intervening on a gendered word.", "The unary factors φ i enable us to do exactly this.", "As described in the previous section, the strength parameter α determines the extent to which m_i should remain unchanged following an intervention: the larger the value, the less likely it is that m_i will be changed.", "Once the new tags have been inferred, the final step is to reinflect the lemmata to their new forms.", "This task has received considerable attention from the NLP community (Cotterell et al., 2016, 2017) .", "We use the inflection model of .", "This model conditions on the lemma x and morphosyntactic tag m to form a distribution over possible inflections.", "For example, given experto and [A; FEM; PL], the trained inflection model will assign a high probability to expertas.", "We provide accuracies for the trained inflection model in Tab.",
"1.", "Experiments We used the Adam optimizer (Kingma and Ba, 2014) to train both parameterizations of our model until the change in dev-loss was less than 10 −5 bits.", "We set β = (0.9, 0.999) without tuning, and chose a learning rate of 0.005 and weight decay factor of 0.0001 after tuning.", "We tuned log α in the set {0.5, 0.75, 1, 2, 5, 10} and chose log α = 1.", "For the neural parameterization, we set n 1 = 9 and n 2 = 3 without any tuning.", "Finally, we trained the inflection model using only gendered words.", "We evaluate our approach both intrinsically and extrinsically.", "For the intrinsic evaluation, we focus on whether our approach yields the correct morphosyntactic tags and the correct reinflections.", "For the extrinsic evaluation, we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models.", "Intrinsic Evaluation To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "We therefore annotated Spanish and Hebrew sentences ourselves, with annotations made by native speakers of each language.", "Specifically, for each language, we extracted sentences containing animate nouns from that language's UD treebank.", "The average length of these extracted sentences was 37 words.", "We then manually inspected each sentence, intervening on the gender of the animate noun and reinflecting the sentence accordingly.", "We chose Spanish and Hebrew because gender agreement operates differ- Table 3 : Tag-level precision, recall, F 1 score, and accuracy and form-level accuracy for the baselines (\"-BASE\") and for our approach (\"-LIN\" is the linear parameterization, \"-NN\" is the neural parameterization).", "ently in each language.", "We provide corpus statistics for both languages in the top two rows of Tab.", "2.", "We created a hard-coded ψ(·, · | ·, ·, ·) to serve as a baseline for each language.", "For Spanish, we only activated, i.e.", "set to a number greater than zero, values that relate adjectives and determiners to nouns; for Hebrew, we only activated values that relate adjectives and verbs to nouns.", "We created two separate baselines because gender agreement operates differently in each language.", "To evaluate our approach, we held all morphosyntactic subtags fixed except for gender.", "For each annotated sentence, we intervened on the gender of the animate noun.", "We then used our model to infer which of the remaining tags should be updated (updating a tag means swapping the gender subtag because all morpho-syntactic subtags were held fixed except for gender) and reinflected the lemmata.", "Finally, we used the annotations to compute the taglevel F 1 score and the form-level accuracy, excluding the animate nouns on which we intervened.", "Results.", "We present the results in Tab.", "3.", "Recall is consistently significantly lower than precision.", "As expected, the baselines have the highest precision (though not by much).", "This is because they reflect well-known rules for each language.", "That said, they have lower recall than our approach because they fail to capture more subtle relationships.", "For both languages, our approach struggles with conjunctions.", "For example, consider the phraseél es un ingeniero y escritor (he is an engineer and a writer).", "Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora.", "This is 
because two nouns do not normally need to have the same gender when they are conjoined.", "Moreover, our MRF does not include co-reference information, so it cannot tell that, in this case, both nouns refer to the same person.", "Note Figure 4 : Gender stereotyping (left) and grammaticality (right) using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "that including co-reference information in our MRF would create cycles and inference would no longer be exact.", "Additionally, the lack of co-reference information means that, for Spanish, our approach fails to convert nouns that are noun-modifiers or indirect objects of verbs.", "Somewhat surprisingly, the neural parameterization does not outperform the linear parameterization.", "We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels; however, this parameter sharing does not seem to make a difference in practice, so the linear parameterization is sufficient.", "Extrinsic Evaluation We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping.", "Following Lu et al.", "(2018) , focus on neural language models.", "We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages.", "As our measure of gender stereotyping, we compare the log ratio of the prefix probabilities under a language model P lm for gendered, animate nouns, such as ingeniero, combined with four adjectives: good, bad, smart, and beautiful.", "The translations we use for these adjectives are given in App.", "B.", "We chose the first two adjectives because they should be used equally to describe men and women, and the latter two because we expect that they will reveal gender stereotypes.", "For example, consider log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS La ingeniera buena x) .", "If this log ratio is close to 0, then the language model is as likely to generate sentences that start with el ingeniero bueno (the good male engineer) as it is to generate sentences that start with la ingeniera bueno (the good female engineer).", "If the log ratio is negative, then the language model is more likely to generate the feminine form than the masculine form, while the opposite is true if the log ratio is positive.", "In practice, given the current gender disparity in engineering, we would expect the log ratio to be positive.", "If, however, the language model were trained on a corpus to which our CDA approach had been applied, we would then expect the log ratio to be much closer to zero.", "Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS El ingeniera bueno x) .", "We trained the linear parameterization using UD treebanks for Spanish, Hebrew, French, and Italian (see Tab.", "2).", "For each of the four languages, we parsed one million sentences from Wikipedia (May 2018 dump) using Dozat and Manning (2016) 's parser and extracted taggings and lemmata using the method of Müller et al.", "(2015) .", "We automatically extracted an animacy gazetteer from WordNet (Bond and Paik, 2012) and then manually filtered the output for correctness.", "We provide the 
size of the languages' animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in Tab.", "4.", "For each sentence containing a noun in our animacy gazetteer, we created a copy of the sentence, intervened on Figure 5 : Gender stereotyping for words that are stereotyped toward men or women in Spanish using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "the noun, and then used our approach to transform the sentence.", "For sentences containing more than one animate noun, we generated a separate sentence for each possible combination of genders.", "Choosing which sentences to duplicate is a difficult task.", "For example, alemán in Spanish can refer to either a German man or the German language; however, we have no way of distinguishing between these two meanings without additional annotations.", "Multilingual animacy detection (Jahan et al., 2018) might help with this challenge; co-reference information might additionally help.", "For each language, we trained the BPE-RNNLM baseline open-vocabulary language model of Mielke and Eisner (2018) using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "We then computed gender stereotyping and grammaticality as described above.", "We provide example phrases in Tab.", "5; we provide a more extensive list of phrases in App.", "C. Results Fig.", "4 demonstrates depicts gender stereotyping and grammaticality for each language using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "It is immediately apparent that our approch reduces gender stereotyping.", "On average, our approach reduces gender stereotyping by a factor of 2.5 (the lowest and highest factors are 1.2 (Ita) and 5.0 (Esp), respectively).", "We expected that naïve swapping of gendered words would also reduce gender stereotyping.", "Indeed, we see that this simple heuristic reduces gender stereotyping for some but not all of the languages.", "For Spanish, we also examine specific words that are stereotyped Table 5 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Phrases 1 and 2 are grammatical, while phrases 3 and 4 are not (dentoted by \"*\").", "Gender stereotyping is measured using phrases 1 and 2.", "Grammaticality is measured using phrases 1 and 3 and using phrases 2 and 4; these scores are then averaged.", "toward men or women.", "We define a word to be stereotyped toward one gender if 75% of its occurrences are of that gender.", "Fig.", "5 suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women.", "The grammaticality of the corpora following CDA differs between languages.", "That said, with the exception of Hebrew, our approach either sacrifices less grammaticality than naïve swapping of gendered words and sometimes increases grammaticality over the original corpus.", "Given that we know the model did not perform as accurately for Hebrew (see Tab.", "3), this finding is not surprising.", "Related Work In contrast to previous work, we focus on mitigating gender stereotypes in languages with rich morphology-specifically languages that exhibit gender 
agreement.", "To date, the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English.", "For example, Bolukbasi et al.", "(2016) proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings; Lu et al.", "(2018) studied gender stereotypes in language models; and Rudinger et al.", "(2018) introduced a novel Winograd schema for evaluating gender stereotypes in co-reference resolution.", "The most closely related work is that of Zhao et al.", "(2018) , who used CDA to reduce gender stereotypes in co-reference resolution; however, their approach yields ungrammatical sentences in morphologically rich languages.", "Our approach is specifically intended to yield grammatical sentences when applied to such languages.", "Habash et al.", "(2019) also focused on morphologically rich languages, specifically Arabic, but in the context of gender identification in machine translation.", "Conclusion We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages.", "To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns.", "To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results.", "For example, we demonstrated that our approach reduces gender stereotyping in neural language models.", "Finally, we also identified avenues for future work, such as the inclusion of co-reference information.", "A Belief Propagation Update Equations Our belief propagation update equations are µ i→f (m) = f ∈N (i)\\{f } µ f →i (m) (3) µ f i →i (m) = φ i (m) µ i→f i (m) (4) µ f ij →i (m) = m ∈M ψ(m , m | p i , p j , ) µ j→f ij (m ) (5) µ f ij →j (m) = m ∈M ψ(m, m | p i , p j , ) µ i→f ij (m ) (6) where N (i) returns the set of neighbouring nodes of node i.", "The belief at any node is given by β(v) = f ∈N (v) µ f →v (m).", "(7) B Adjective Translations Tab.", "6 and Tab.", "7 contain the feminine and masculine translations of the four adjectives that we used.", "C Extrinsic Evaluation Example Phrases For each noun in our animacy gazetteer, we generated sixteen phrases.", "Consider the noun engineer as an example.", "We created four phrases-one for each translation of The good engineer, The bad engineer, The smart engineer, and The beautiful engineer.", "These phrases, as well as their prefix log-likelihoods are provided below in Tab.", "8.", "Table 8 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Ungrammatical phrases are denoted by \"*\".", "Phrase" ] }
{ "paper_header_number": [ "1", "2", "4.", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Gender Stereotypes in Text", "Reinflect the lemmata to their new forms.", "A Markov Random Field for Morpho-Syntactic Agreement", "Parameterization", "Inference", "Parameter Estimation", "Intervention", "Experiments", "Intrinsic Evaluation", "Extrinsic Evaluation", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-98#paper-1253#slide-8
Extrinsic evaluation: train language models on C balanced data, then evaluate
Gender bias: log p(Der gute Arzt x) / p(Die gute Arztin x) (ok). Grammaticality: log p(Der gute Arzt x) / p(Der gute Arztin x) (bad). [Bar charts of both measures for Esp, Fra, Heb, Ita]
Gender bias: log p(Der gute Arzt x) / p(Die gute Arztin x) (ok). Grammaticality: log p(Der gute Arzt x) / p(Der gute Arztin x) (bad). [Bar charts of both measures for Esp, Fra, Heb, Ita]
[]
GEM-SciDuet-train-98#paper-1253#slide-9
1253
Counterfactual Data Augmentation for Mitigating Gender Stereotypes in Languages with Rich Morphology
Gender stereotypes are manifest in most of the world's languages and are consequently propagated or amplified by NLP systems. Although research has focused on mitigating gender stereotypes in English, the approaches that are commonly employed produce ungrammatical sentences in morphologically rich languages. We present a novel approach for converting between masculine-inflected and feminine-inflected sentences in such languages. For Spanish and Hebrew, our approach achieves F1 scores of 82% and 73% at the level of tags and accuracies of 90% and 87% at the level of forms. By evaluating our approach using four different languages, we show that, on average, it reduces gender stereotyping by a factor of 2.5 without any sacrifice to grammaticality.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238 ], "paper_content_text": [ "Introduction One of the biggest challenges faced by modern natural language processing (NLP) systems is the inadvertent replication or amplification of societal biases.", "This is because NLP systems depend on language corpora, which are inherently \"not objective; they are creations of human design\" (Crawford, 2013) .", "One type of societal bias that has received considerable attention from the NLP community is gender stereotyping (Garg et al., 2017; Rudinger et al., 2017; Sutton et al., 2018) .", "Gender stereotypes can manifest in language in overt ways.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "Consequently, any NLP system that is trained such a corpus will likely learn to associate engineer with men, but not with women (De-Arteaga et al., 2019) .", "To date, the NLP community has focused primarily on approaches for detecting and mitigating gender stereotypes in English (Bolukbasi et al., 2016; Dixon et al., 2018; Zhao et al., 2017 ).", "Yet, gender stereotypes also exist in other languages .", "We extract the properties of each word in the sentence.", "We then fix a noun and its tags and infer the manner in which the remaining tags must be updated.", "Finally, we reinflect the lemmata to their new forms.", "because they are a function of society, not of grammar.", "Moreover, because English does not mark grammatical gender, approaches developed for English are not transferable to morphologically rich languages that exhibit gender agreement (Corbett, 1991) .", "In these languages, the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "This means that if the gender of one word changes, the others have to be updated to match.", "As a result, simple heuristics, such as augmenting a corpus with additional sentences in which he and she have been swapped (Zhao et al., 2018) , will yield ungrammatical sentences.", "Consider the Spanish phrase el ingeniero experto (the skilled engineer).", "Replacing ingeniero with ingeniera is insufficient-el must also be replaced with la and experto with experta.", "In this paper, we present a new approach to counterfactual data augmentation (CDA; Lu et al., 2018) for mitigating gender stereotypes associated with animate 1 nouns (i.e., nouns that 
represent people) for morphologically rich languages.", "We introduce a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change when altering the grammatical gender of particular nouns.", "We use this model as part of a four-step process, depicted in Fig.", "1 , to reinflect entire sentences following an intervention on the grammatical gender of one word.", "We intrinsically evaluate our approach using Spanish and Hebrew, achieving tag-level F 1 scores of 83% and 72% and form-level accuracies of 90% and 87%, respectively.", "We also conduct an extrinsic evaluation using four languages.", "Following Lu et al.", "(2018) , we show that, on average, our approach reduces gender stereotyping in neural language models by a factor of 2.5 without sacrificing grammaticality.", "Gender Stereotypes in Text Men and women are mentioned at different rates in text (Coates, 1987) .", "This problem is exacerbated in certain contexts.", "For example, the sentence he is an engineer is more likely to appear in a corpus than she is an engineer due to the current gender disparity in engineering.", "This imbalance in representation can have a dramatic downstream effect on NLP systems trained on such a corpus, such as giving preference to male engineers over female engineers in an automated resumé filtering system.", "Gender stereotypes of this sort have been observed in word embeddings (Bolukbasi et al., 2016; Sutton et al., 2018) , contextual word embeddings (Zhao et al., 2019) , and co-reference resolution systems (Rudinger et al., 2018; Zhao et al., 2018) inter alia.", "A quick fix: swapping gendered words.", "One approach to mitigating such gender stereotypes is counterfactual data augmentation (CDA; Lu et al., 2018) .", "In English, this involves augmenting a corpus with additional sentences in which gendered words, such as he and she, have been swapped to yield a balanced representation.", "Indeed, Zhao et al.", "(2018) showed that this simple heuristic significantly reduces gender stereotyping in neural co-reference resolution systems, without harming system performance.", "Unfortunately, this approach is only applicable to English and other languages with little morphological inflection.", "When applied to morphologically rich languages that exhibit gender agreement, it yields ungrammatical sentences.", "The problem: inflected languages.", "Many languages, including Spanish and Hebrew, have gender inflections for nouns, verbs, and adjectivesi.e., the words in a sentence are marked with morphological endings that reflect the grammatical gender of the surrounding nouns.", "2 This means that if the gender of one word changes, the others have to be updated to preserve morpho-syntactic agreement (Corbett, 2012) .", "Consider the following example from Spanish, where we wish to transform Sentence (1) to Sentence (2).", "(Parts of words that mark gender are depicted in bold.)", "This task is not as simple as replacing el with la-ingeniero and experto must also be reinflected.", "Moreover, the changes required for one language are not the same as those required for another (e.g., verbs are marked with gender in Hebrew, but not in Spanish).", "( Our approach.", "Our goal is to transform sentences like Sentence (1) to Sentence (2) and vice versa.", "To the best of our knowledge, this task has not been studied previously.", "Indeed, there is no existing annotated corpus of paired sentences that could be used to train a supervised model.", "As a result, we 
take an unsupervised 3 approach using dependency trees, lemmata, part-of-speech (POS) tags, and morpho-syntactic tags from Universal Dependencies corpora (UD; Nivre et al., 2018) .", "Specifically, we propose the following four-step process: 1.", "Analyze the sentence (including parsing, morphological analysis, and lemmatization).", "Figure 2 : Dependency tree for the sentence El ingeniero alemán es muy experto.", "2.", "Intervene on a gendered word.", "3.", "Infer the new morpho-syntactic tags.", "Reinflect the lemmata to their new forms.", "This process is depicted in Fig.", "1 .", "The primary technical contribution is a novel Markov random field for performing step 3, described in the next section.", "A Markov Random Field for Morpho-Syntactic Agreement In this section, we present a Markov random field (MRF; Koller and Friedman, 2009 ) for morphosyntactic agreement.", "This model defines a joint distribution over sequences of morpho-syntactic tags, conditioned on a labeled dependency tree with associated part-of-speech tags.", "Given an intervention on a gendered word, we can use this model to infer the manner in which the remaining tags must be updated to preserve morpho-syntactic agreement.", "A dependency tree for a sentence (see Fig.", "2 for an example) is a set of ordered triples (i, j, ), where i and j are positions in the sentence (or a distinguished root symbol) and ∈ L is the label of the edge i → j in the tree; each position occurs exactly once as the first element in a triple.", "Each dependency tree T is associated with a sequence of morpho-syntactic tags m = m 1 , .", ".", ".", ", m |T | and a sequence of part-ofspeech (POS) tags p = p 1 , .", ".", ".", ", p |T | .", "For example, the tags m ∈ M and p ∈ P for ingeniero are [MSC; SG] and NOUN, respectively, because ingeniero is a masculine, singular noun.", "For notational simplicity, we define M = M |T | to be the set of all length-|T | sequences of morpho-syntactic tags.", "We define the probability of m given T and p as Pr(m | T, p) ∝ (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ), (1) where the binary factor ψ(·, · | ·, ·, ·) ≥ 0 scores how well the morpho-syntactic tags m i and m j agree given the POS tags p i and p j and the label .", "For example, consider the amod (adjectival modifier) edge from experto to ingeniero in Fig.", "2 .", "The factor ψ(m i , m j | A, N, amod) returns a high score if the corresponding morpho-syntactic tags agree in gender and number (e.g., m i = [MSC; SG] and m j = [MSC; SG]) and a low score if they do not (e.g., m i = [MSC; SG] and m j = [FEM; PL]).", "The unary factor φ i (·) ≥ 0 scores a morpho-syntactic tag m i outside the context of the dependency tree.", "As we explain in §3.1, we use these unary factors to force or disallow particular tags when performing an intervention; we do not learn them.", "Eq.", "(1) is normalized by the following partition function: Z(T, p) = m ∈M (i,j, )∈T φ i (m i ) · ψ(m i , m j | p i , p j , ).", "Z(T, p) can be calculated using belief propagation; we provide the update equations that we use in App.", "A.", "Our model is depicted in Fig.", "3 .", "It is noteworthy that this model is delexicalized-i.e., it considers only the labeled dependency tree and the POS tags, not the actual words themselves.", "Parameterization We consider a linear parameterization and a neural parameterization of the binary factor ψ(·, · | ·, ·, ·).", "Linear parameterization.", "We define a matrix W (p i , p j , ) ∈ R c×c for each triple (p i , p j , ), where c is the number 
of morpho-syntactic subtags.", "For example, [MSC; SG] has two subtags MSC and SG.", "We then define ψ(·, · | ·, ·, ·) as follows: ψ(m i , m j | p i , p j , ) = exp (m i W (p i , p j , )m j ), where m i ∈ {0, 1} c is a multi-hot encoding of m i .", "Neural parameterization.", "As an alternative, we also define a neural parameterization of W (p i , p j , ) to allow parameter sharing among El ingeniero alemán es muy experto edges with different parts of speech and labels: φ1(·) φ2(·) φ3(·) φ4(·) φ5(·) φ6(·) ψ(·, · | D, N, det) ψ(·, · | A, N, amod) ψ(·, · | N, V, cop) ψ(·, · | AV, A, advmod) ψ(·, · | A, N, amod) W (p i , p j , ) = exp (U tanh(V [e(p i ); e(p j ); e( )])) where U ∈ R c×c×n 1 , V ∈ R n 1 ×3n 2 , and n 1 and n 2 define the structure of the neural parameterization and each e(·) ∈ R n 2 is an embedding function.", "Parameterization of φ i .", "We use the unary factors only to force or disallow particular tags when performing an intervention.", "Specifically, we define φ i (m) = α if m = m i 1 otherwise, (2) where α > 1 is a strength parameter that determines the extent to which m i should remain unchanged following an intervention.", "In the limit as α → ∞, all tags will remain unchanged except for the tag directly involved in the intervention.", "4 Inference Because our MRF is acyclic and tree-shaped, we can use belief propagation (Pearl, 1988) to perform exact inference.", "The algorithm is a generalization of the forward-backward algorithm for hidden Markov models (Rabiner and Juang, 1986 Parameter Estimation We use gradient-based optimization.", "We treat the negative log-likelihood − log (Pr(m | T, p)) as the loss function for tree T and compute its gradient using automatic differentiation (Rall, 1981) .", "We learn the parameters of §3.1 by optimizing the negative log-likelihood using gradient descent.", "Intervention As explained in §2, our goal is to transform sentences like Sentence (1) to Sentence (2) by intervening on a gendered word and then using our model to infer the manner in which the remaining tags must be updated to preserve morphosyntactic agreement.", "For example, if we change the morpho-syntactic tag for ingeniero from [MSC;SG] to [FEM;SG], then we must also update the tags for el and experto, but do not need to update the tag for es, which should remain unchanged as [IN; PR; SG].", "If we intervene on the i th word in a sentence, changing its tag from m i to m i , then using our model to infer the manner in which the remaining tags must be updated means using Pr(m −i | m i , T, p) to identify high-probability tags for the remaining words.", "Crucially, we wish to change as little as possible when intervening on a gendered word.", "The unary factors φ i enable us to do exactly this.", "As described in the previous section, the strength parameter α determines the extent to which m i should remain unchanged following an intervention-the larger the value, the less likely it is that m i will be changed.", "Once the new tags have been inferred, the final step is to reinflect the lemmata to their new forms.", "This task has received considerable attention from the NLP community (Cotterell et al., 2016 (Cotterell et al., , 2017 .", "We use the inflection model of .", "This model conditions on the lemma x and morphosyntactic tag m to form a distribution over possible inflections.", "For example, given experto and [A; FEM; PL], the trained inflection model will assign a high probability to expertas.", "We provide accuracies for the trained inflection model in Tab.", 
"1.", "Experiments We used the Adam optimizer (Kingma and Ba, 2014) to train both parameterizations of our model until the change in dev-loss was less than 10 −5 bits.", "We set β = (0.9, 0.999) without tuning, and chose a learning rate of 0.005 and weight decay factor of 0.0001 after tuning.", "We tuned log α in the set {0.5, 0.75, 1, 2, 5, 10} and chose log α = 1.", "For the neural parameterization, we set n 1 = 9 and n 2 = 3 without any tuning.", "Finally, we trained the inflection model using only gendered words.", "We evaluate our approach both intrinsically and extrinsically.", "For the intrinsic evaluation, we focus on whether our approach yields the correct morphosyntactic tags and the correct reinflections.", "For the extrinsic evaluation, we assess the extent to which using the resulting transformed sentences reduces gender stereotyping in neural language models.", "Intrinsic Evaluation To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "We therefore annotated Spanish and Hebrew sentences ourselves, with annotations made by native speakers of each language.", "Specifically, for each language, we extracted sentences containing animate nouns from that language's UD treebank.", "The average length of these extracted sentences was 37 words.", "We then manually inspected each sentence, intervening on the gender of the animate noun and reinflecting the sentence accordingly.", "We chose Spanish and Hebrew because gender agreement operates differ- Table 3 : Tag-level precision, recall, F 1 score, and accuracy and form-level accuracy for the baselines (\"-BASE\") and for our approach (\"-LIN\" is the linear parameterization, \"-NN\" is the neural parameterization).", "ently in each language.", "We provide corpus statistics for both languages in the top two rows of Tab.", "2.", "We created a hard-coded ψ(·, · | ·, ·, ·) to serve as a baseline for each language.", "For Spanish, we only activated, i.e.", "set to a number greater than zero, values that relate adjectives and determiners to nouns; for Hebrew, we only activated values that relate adjectives and verbs to nouns.", "We created two separate baselines because gender agreement operates differently in each language.", "To evaluate our approach, we held all morphosyntactic subtags fixed except for gender.", "For each annotated sentence, we intervened on the gender of the animate noun.", "We then used our model to infer which of the remaining tags should be updated (updating a tag means swapping the gender subtag because all morpho-syntactic subtags were held fixed except for gender) and reinflected the lemmata.", "Finally, we used the annotations to compute the taglevel F 1 score and the form-level accuracy, excluding the animate nouns on which we intervened.", "Results.", "We present the results in Tab.", "3.", "Recall is consistently significantly lower than precision.", "As expected, the baselines have the highest precision (though not by much).", "This is because they reflect well-known rules for each language.", "That said, they have lower recall than our approach because they fail to capture more subtle relationships.", "For both languages, our approach struggles with conjunctions.", "For example, consider the phraseél es un ingeniero y escritor (he is an engineer and a writer).", "Replacing ingeniero with ingeniera does not necessarily result in escritor being changed to escritora.", "This is 
because two nouns do not normally need to have the same gender when they are conjoined.", "Moreover, our MRF does not include co-reference information, so it cannot tell that, in this case, both nouns refer to the same person.", "Note Figure 4 : Gender stereotyping (left) and grammaticality (right) using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "that including co-reference information in our MRF would create cycles and inference would no longer be exact.", "Additionally, the lack of co-reference information means that, for Spanish, our approach fails to convert nouns that are noun-modifiers or indirect objects of verbs.", "Somewhat surprisingly, the neural parameterization does not outperform the linear parameterization.", "We proposed the neural parameterization to allow parameter sharing among edges with different parts of speech and labels; however, this parameter sharing does not seem to make a difference in practice, so the linear parameterization is sufficient.", "Extrinsic Evaluation We extrinsically evaluate our approach by assessing the extent to which it reduces gender stereotyping.", "Following Lu et al.", "(2018) , focus on neural language models.", "We choose language models over word embeddings because standard measures of gender stereotyping for word embeddings cannot be applied to morphologically rich languages.", "As our measure of gender stereotyping, we compare the log ratio of the prefix probabilities under a language model P lm for gendered, animate nouns, such as ingeniero, combined with four adjectives: good, bad, smart, and beautiful.", "The translations we use for these adjectives are given in App.", "B.", "We chose the first two adjectives because they should be used equally to describe men and women, and the latter two because we expect that they will reveal gender stereotypes.", "For example, consider log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS La ingeniera buena x) .", "If this log ratio is close to 0, then the language model is as likely to generate sentences that start with el ingeniero bueno (the good male engineer) as it is to generate sentences that start with la ingeniera bueno (the good female engineer).", "If the log ratio is negative, then the language model is more likely to generate the feminine form than the masculine form, while the opposite is true if the log ratio is positive.", "In practice, given the current gender disparity in engineering, we would expect the log ratio to be positive.", "If, however, the language model were trained on a corpus to which our CDA approach had been applied, we would then expect the log ratio to be much closer to zero.", "Because our approach is specifically intended to yield sentences that are grammatical, we additionally consider the following log ratio (i.e., the grammatical phrase over the ungrammatical phrase): log x∈Σ * P lm (BOS El ingeniero bueno x) x∈Σ * P lm (BOS El ingeniera bueno x) .", "We trained the linear parameterization using UD treebanks for Spanish, Hebrew, French, and Italian (see Tab.", "2).", "For each of the four languages, we parsed one million sentences from Wikipedia (May 2018 dump) using Dozat and Manning (2016) 's parser and extracted taggings and lemmata using the method of Müller et al.", "(2015) .", "We automatically extracted an animacy gazetteer from WordNet (Bond and Paik, 2012) and then manually filtered the output for correctness.", "We provide the 
size of the languages' animacy gazetteers and the percentage of automatically parsed sentences that contain an animate noun in Tab. 4.", "For each sentence containing a noun in our animacy gazetteer, we created a copy of the sentence, intervened on the noun, and then used our approach to transform the sentence.", "Figure 5 : Gender stereotyping for words that are stereotyped toward men or women in Spanish using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "For sentences containing more than one animate noun, we generated a separate sentence for each possible combination of genders.", "Choosing which sentences to duplicate is a difficult task.", "For example, alemán in Spanish can refer to either a German man or the German language; however, we have no way of distinguishing between these two meanings without additional annotations.", "Multilingual animacy detection (Jahan et al., 2018) might help with this challenge; co-reference information might additionally help.", "For each language, we trained the BPE-RNNLM baseline open-vocabulary language model of Mielke and Eisner (2018) using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "We then computed gender stereotyping and grammaticality as described above.", "We provide example phrases in Tab. 5; we provide a more extensive list of phrases in App. C.", "Results Fig. 4 depicts gender stereotyping and grammaticality for each language using the original corpus, the corpus following CDA using naïve swapping of gendered words, and the corpus following CDA using our approach.", "It is immediately apparent that our approach reduces gender stereotyping.", "On average, our approach reduces gender stereotyping by a factor of 2.5 (the lowest and highest factors are 1.2 (Ita) and 5.0 (Esp), respectively).", "We expected that naïve swapping of gendered words would also reduce gender stereotyping.", "Indeed, we see that this simple heuristic reduces gender stereotyping for some but not all of the languages.", "For Spanish, we also examine specific words that are stereotyped toward men or women.", "Table 5 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Phrases 1 and 2 are grammatical, while phrases 3 and 4 are not (denoted by \"*\").", "Gender stereotyping is measured using phrases 1 and 2.", "Grammaticality is measured using phrases 1 and 3 and using phrases 2 and 4; these scores are then averaged.", "We define a word to be stereotyped toward one gender if 75% of its occurrences are of that gender.", "Fig. 5 suggests a clear reduction in gender stereotyping for specific words that are stereotyped toward men or women.", "The grammaticality of the corpora following CDA differs between languages.", "That said, with the exception of Hebrew, our approach sacrifices less grammaticality than naïve swapping of gendered words and sometimes even increases grammaticality over the original corpus.", "Given that we know the model did not perform as accurately for Hebrew (see Tab. 3), this finding is not surprising.", "Related Work In contrast to previous work, we focus on mitigating gender stereotypes in languages with rich morphology, specifically languages that exhibit gender
agreement.", "To date, the NLP community has focused on approaches for detecting and mitigating gender stereotypes in English.", "For example, Bolukbasi et al.", "(2016) proposed a way of mitigating gender stereotypes in word embeddings while preserving meanings; Lu et al.", "(2018) studied gender stereotypes in language models; and Rudinger et al.", "(2018) introduced a novel Winograd schema for evaluating gender stereotypes in co-reference resolution.", "The most closely related work is that of Zhao et al.", "(2018) , who used CDA to reduce gender stereotypes in co-reference resolution; however, their approach yields ungrammatical sentences in morphologically rich languages.", "Our approach is specifically intended to yield grammatical sentences when applied to such languages.", "Habash et al.", "(2019) also focused on morphologically rich languages, specifically Arabic, but in the context of gender identification in machine translation.", "Conclusion We presented a new approach for converting between masculine-inflected and feminine-inflected noun phrases in morphologically rich languages.", "To do this, we introduced a Markov random field with an optional neural parameterization that infers the manner in which a sentence must change to preserve morpho-syntactic agreement when altering the grammatical gender of particular nouns.", "To the best of our knowledge, this task has not been studied previously.", "As a result, there is no existing annotated corpus of paired sentences that can be used as \"ground truth.\"", "Despite this limitation, we evaluated our approach both intrinsically and extrinsically, achieving promising results.", "For example, we demonstrated that our approach reduces gender stereotyping in neural language models.", "Finally, we also identified avenues for future work, such as the inclusion of co-reference information.", "A Belief Propagation Update Equations Our belief propagation update equations are µ i→f (m) = f ∈N (i)\\{f } µ f →i (m) (3) µ f i →i (m) = φ i (m) µ i→f i (m) (4) µ f ij →i (m) = m ∈M ψ(m , m | p i , p j , ) µ j→f ij (m ) (5) µ f ij →j (m) = m ∈M ψ(m, m | p i , p j , ) µ i→f ij (m ) (6) where N (i) returns the set of neighbouring nodes of node i.", "The belief at any node is given by β(v) = f ∈N (v) µ f →v (m).", "(7) B Adjective Translations Tab.", "6 and Tab.", "7 contain the feminine and masculine translations of the four adjectives that we used.", "C Extrinsic Evaluation Example Phrases For each noun in our animacy gazetteer, we generated sixteen phrases.", "Consider the noun engineer as an example.", "We created four phrases-one for each translation of The good engineer, The bad engineer, The smart engineer, and The beautiful engineer.", "These phrases, as well as their prefix log-likelihoods are provided below in Tab.", "8.", "Table 8 : Prefix log-likelihoods of Spanish phrases using the original corpus, the corpus following CDA using naïve swapping of gendered words (\"Swap\"), and the corpus following CDA using our approach (\"MRF\").", "Ungrammatical phrases are denoted by \"*\".", "Phrase" ] }
{ "paper_header_number": [ "1", "2", "4.", "3", "3.1", "3.2", "3.3", "4", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Gender Stereotypes in Text", "Reinflect the lemmata to their new forms.", "A Markov Random Field for Morpho-Syntactic Agreement", "Parameterization", "Inference", "Parameter Estimation", "Intervention", "Experiments", "Intrinsic Evaluation", "Extrinsic Evaluation", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-98#paper-1253#slide-9
Conclusion
1. As so often, things that are easy in English... ...become surprisingly hard in other languages. 2. Old-school probabilistic models often work well enough™ 3. And, always, careful with your training data, Eugene!
1. As so often, things that are easy in English... ...become surprisingly hard in other languages. 2. Old-school probabilistic models often work well enough™ 3. And, always, careful with your training data, Eugene!
[]
GEM-SciDuet-train-99#paper-1262#slide-0
1262
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime 1 for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249 ], "paper_content_text": [ "Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014) .", "Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "1 Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b) , Gigaword (Parker et al., 2011) , WikiWars (Mazur and Dale, 2010) , and Tweets.", "From the analysis we make four findings about time expressions.", "First, most time expressions are very short, with 80% of time expressions containing no more than three tokens.", "Second, at least 91.8% of time expressions contain at least one time token.", "Third, the vocabulary used to express time information is very small, with a small group of keywords.", "Finally, words in time expressions demonstrate similar syntactic behaviour.", "All the findings relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "According to the findings we propose a typebased approach named SynTime ('Syn' stands for syntactic) to recognize time expressions.", "Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions.", "Time tokens are the words that explicitly express time information, such as time units (e.g., 'year').", "Modifiers modify time tokens; they appear before or after time tokens, e.g., 'several' and 'ago' in 'several years ago.'", "Numerals are ordinals and numbers.", "From free text SynTime first identifies time tokens, then recognizes modifiers and numerals.", "Naturally, SynTime is a rule-based tagger.", "The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules.", "The definition of token type in SynTime is inspired by 
part-of-speech in which \"linguists group some words of language into classes (sets) which show similar syntactic behaviour.\"", "(Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour.", "Other rulebased taggers define types for tokens based on their semantic meaning.", "For example, SUTime defines 5 semantic modifier types, such as frequency modifiers; 2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens.", "(See Section 4.1 for details.)", "Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves.", "SynTime instead designs general rules on the token types rather than on the tokens themselves.", "For example, our general rules do not work on tokens 'February' nor '1989' but on their token types 'MONTH' and 'YEAR.'", "That is why we call SynTime a type-based approach.", "More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position.", "In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion.", "The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time.", "The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "(The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.)", "Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.", "3 datasets.", "More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset.", "To sum up, we make the following contributions.", "• We analyze time expressions from four datasets and make four findings.", "The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "• We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules.", "Syn-Time is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages.", "• We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines.", "Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "The task is divided into two subtasks: recognition and normalization.", "Rule-based Time Expression Recognition.", "Rule-based time taggers like GUTime, Heidel-Time, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Strötgen and Gertz, 2010; Chang and Manning, 2012) .", "Heidel-Time (Strötgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression.", "SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang 
and Manning, 2014) .", "It first identifies individual words, then expands them to chunks, and finally to time expressions.", "Rule-based taggers achieve very good results in TempEval exercises.", "SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens.", "Moreover, SynTime designs rules in a heuristic way.", "Machine Learning based Method.", "Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions.", "Example features include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013) .", "The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Some models obtain good performance, and even achieve the highest F 1 of 82.71% on strict match in TempEval-3 (Bethard, 2013) .", "Outside TempEval exercises, Angeli et al.", "leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012) .", "In the method named UWTime, Lee et al.", "handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014) .", "The two methods explicitly use linguistic information.", "In (Lee et al., 2014) , especially, CCG could capture rich structure information of language, similar to the rule-based methods.", "Tabassum et al.", "focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016) .", "They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.", "However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time.", "Time Expression Normalization.", "Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Strötgen and Gertz, 2010; Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Because the rule systems have high similarity, Llorens et al.", "suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012) .", "Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016) .", "Lee et al.", "(Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al.", "(Tabassum et al., 2016 ) use a loglinear algorithm to normalize time expressions.", "SynTime focuses only on the recognition task.", "The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012) .", "We conduct an analysis on four datasets: Time-Bank, Gigaword, WikiWars, and Tweets.", "Time-Bank (Pustejovsky et al., 2003b ) is a benchmark dataset in TempEval series (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , consisting of 183 news articles.", "Gigaword (Parker et al., 2011 ) is a large 
automatically labelled dataset with 2,452 news articles and used in TempEval-3.", "WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010) .", "Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression.", "Table 1 summarizes the datasets.", "Finding From the four datasets, we analyze their time expressions and make four findings.", "We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics.", "Finding 1 Time expressions are very short.", "More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words.", "Figure 1 plots the length distribution of time expressions.", "Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length Finding 2 More than 91% of time expressions contain at least one time token.", "The second column in Table 2 reports the percentage of time expressions that contain at least one time token.", "We find that at least 91.81% of time expressions contain time token(s).", "(Some time expressions have no time token but depend on other time expressions; in '2 to 8 days,' for example, '2' depends on '8 days.')", "This suggests that time tokens account for time expressions.", "Therefore, to recognize time expressions, it is essential to recognize their time tokens.", "Finding 3 Only a small group of time-related keywords are used to express time information.", "From the time expressions in all four datasets, we find that the group of keywords used to express time information is small.", "Table 3 reports the number of distinct words and of distinct time tokens.", "The words/tokens are manually normalized before counting and their variants are ignored.", "For example, 'year' and '5yrs' are counted as one token 'year.'", "Numerals in the counting are ignored.", "Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable.", "Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282.", "Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets.", "This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets.", "In other words, time expressions highly overlap at their time tokens.", "Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents.", "For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text.", "Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD.", "This indicates that POS could not provide enough information to distinguish time expressions from common words.", "However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT.", "Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD.", "This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way.", "When seeing this, we realize that this is exactly how linguists define part-of-speech for 
language.", "4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language.", "The four findings all relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small.", "To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral.", "SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types.", "Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level.", "Token types at the type level group the tokens of time expressions.", "Heuristic rules lie at the rule level, working on token types rather than on tokens themselves.", "That is why the heuristic rules are general.", "For example, the heuristic rules do not work on tokens '1989' nor 'February,' but on their token types 'YEAR' and 'MONTH.'", "The heuristic rules are only relevant to token types, and are independent of specific tokens.", "For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English.", "The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types.", "Figure 3 shows the overview of SynTime in practice.", "Shown on the left-hand side, SynTime is initialized with regular expressions over tokens.", "After initialization, SynTime can be directly applied on text.", "On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type.", "The expansion enables SynTime to recognize time expressions in text from different domains and different text types.", "Shown on the right-hand side of Figure 3 , Syn-Time recognizes time expression through three main steps.", "In the first step, SynTime identifies time tokens from the POS-tagged raw text.", "Then around the time tokens SynTime searches for modifiers and numerals to form time segments.", "In the last step, SynTime transforms the time segments to time expressions.", "SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral.", "Token types to tokens is like POS tags to words; for example, 'February' has a POS tag of NNP and a token type of MONTH.", "Time Token.", "We define 15 token types for the time tokens and use their names similar to Joda-Time classes: 5 DECADE (-), YEAR (-), SEA-SON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), 
TIME ZONE (6), and ERA (2).", "Number in '()' indicates the number of distinct tokens in this token type.", "'-' indicates that this token type involves changing digits and cannot be counted.", "Modifier.", "We define 3 token types for the modifiers according to their possible positions relative to time tokens.", "Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2).", "LINKAGE (4) link two time tokens.", "Besides, we define 2 special modifier types, COMMA (1) for comma ',' and IN ARTICLE (2) for indefinite articles 'a' and 'an.'", "TimeML (Pustejovsky et al., 2003a) and Time-Bank (Pustejovsky et al., 2003b) do not treat most prepositions like 'on' as a part of time expressions.", "Thus SynTime does not collect those prepositions.", "Numeral.", "Number in time expressions can be a time token e.g., '10' in 'October 10, 2016,' or a modifier e.g., '10' in '10 days.'", "We define NU-MERAL (-) for the ordinals and numbers.", "SynTime Initialization.", "The token regular expressions for initializing SynTime are collected from SUTime, 6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval-3 (Chang and Manning, , 2013 .", "Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions.", "Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions.", "The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction.", "Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions.", "Some words might cause ambiguity.", "For example, 'May' could be a modal verb, or the fifth month of year.", "To filter out the ambiguous words, we use POS information.", "In implementation, we use Stanford POS Tagger; 7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2.", "Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions.", "In the next two steps, SynTime works on those token types.", "Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment.", "The searching is under simple heuristic rules in which the key idea is to expand the time token's boundaries.", "At first, each time token is a time segment.", "If it is either a PERIOD or DURATION, then no need to further search.", "Otherwise, search its left and its right for modifiers and numerals.", "For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching.", "For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching.", "Both the left and the right searching stop when reaching a COMMA or LINK-AGE or a non-modifier/numeral word.", "The left searching does not exceed the previous time token; the right searching does not exceed the next time token.", "A time segment consists of exactly one time token, and zero or some modifiers/numerals.", "A special kind of time segments do not contain any time token; they depend on other time segments next to them.", "For example, 
in '8 to 20 days,' 'to 20 days' is a time segment, and '8 to' forms a dependent time segment.", "(See Figure 4(e) .)", "Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment.", "We scan the time segments in a sentence from beginning to the end.", "A stand-alone time segment is a time expression.", "(See Figure 4(a) .)", "The focus is to deal with two or more time segments that are adjacent or overlapping.", "If two time segments s 1 and s 2 are adjacent, merge them to form a new time segment s 1 .", "(See Figure 4(b) .)", "Consider that s 1 and s 2 overlap at a shared boundary.", "According to our time segment identification, the shared boundary could be a modifier or a numeral.", "If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s 1 and s 2 .", "(See Figure 4(c) .)", "If the word is a LINKAGE, then extract s 1 as a time expression and continue scanning.", "When the shared boundary is a COMMA, merge s 1 and s 2 only if the COMMA's previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same.", "(See Figure 4(d) .)", "Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types.", "After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types.", "SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule.", "The expansion requires the words to be added to be annotated manually.", "We apply the initial SynTime on the time expressions from training text and list the words that are not covered.", "Whether the uncovered words are added to SynTime is manually determined.", "The rule for determination is that the added words can not cause ambiguity and should be generic.", "Wiki-Wars dataset contains a few examples like this: 'The time Arnold reached Quebec City.'", "Words in this example are extremely descriptive, and we do not collect them.", "In tweets, on the other hand, people may use abbreviations and informal variants; for example, '2day' and 'tday' are popular spellings of 'today.'", "Such kind of abbreviations and informal variants will be collected.", "According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much.", "In addition, we find that even in tweets people tend to use formal words.", "In the Twitter word clusters trained from 56 million English tweets, 8 the most often used words are the formal words, and their frequencies are much greater than the informal words'.", "The cluster of 'today,' 9 for example, its most often use is the formal one, 'today,' which appears 1,220,829 times; while its second most often use '2day' appears only 34,827 times.", "The low rate of informal words (e.g., about 3% in 'today' cluster) suggests that even in informal environment the manual keyword addition costs little.", "Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UW-Time) on three datasets (i.e., TimeBank, Wiki-Wars, and Tweets).", "WikiWars is a specific domain 
dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text.", "For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I.", "Experiment Setting Datasets.", "We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset that are collected from Twitter.", "For Tweets dataset, we randomly sample 4000 tweets and use SUTime to tag them.", "942 tweets of which each contains at least one time expression.", "From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions.", "We therefore roughly consider that SU-Time misses about 3% time expressions in tweets.", "Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank.", "We finally get 1,127 manually labeled time expressions.", "For the 942 tweets, we randomly sample 200 tweets as test set, and the rest 742 as training set, because a baseline UWTime requires training.", "Baseline Methods.", "We compare SynTime with methods: HeidelTime (Strötgen and Gertz, 2010) , SUTime (Chang and , and UW- Evaluation Metrics.", "We follow TempEval-3 and use their evaluation toolkit 10 to report P recision, Recall, and F 1 in terms of strict match and relaxed match (UzZaman et al., 2013).", "22, 1986' and 'February 01, 1989 ' at the level of word or of character.", "One suggestion is to consider a type-based learning method that could use type information.", "For example, the above two time expressions refer to the same pattern of 'MONTH NUMERAL COMMA Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. 
On TimeBank and Tweets datasets, only a few tokens are added, the corresponding results are affected slightly.", "This confirms that the size of time words is small, and that SynTime-I covers most of time words.", "On WikiWars dataset, relatively more tokens are added, SynTime-E performs much better than SynTime-I, especially in recall.", "It improves the recall by 3.25% in strict match and by 2.98% in relaxed match.", "This indicates that with more words added from specific domains (e.g., WikiWars dataset about war), SynTime can significantly improve the performance.", "Experiment Result Limitations SynTime assumes that words are tokenized and POS tagged correctly.", "In reality, however, the tokenized and tagged words are not that perfect, due to the limitation of used tools.", "For example, Stanford POS Tagger assigns VBD to the word 'sat' in 'friday or sat' while whose tag should be NNP.", "The incorrect tokens and POS tags affect the result.", "Conclusion and future work We conduct an analysis on time expressions from four datasets, and find that time expressions in general are very short and expressed by a small vocabulary, and words in time expressions demonstrate similar syntactic behavior.", "Our findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "Inspired by part-of-speech, based on the findings, we define a syntactic type system for the time expression, and propose a type-based time expression tagger, named by SynTime.", "SynTime defines syntactic token types for tokens and on the token types it designs general heuristic rules based on the idea of boundary expansion.", "Experiments on three datasets show that SynTime outperforms the stateof-the-art baselines, including rule-based time taggers and machine learning based time tagger.", "Because our heuristic rules are quite simple, Syn-Time is light-weight and runs in real time.", "Our token types and heuristic rules are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.", "Time expression is part of language and follows the principle of least effort.", "Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995) , we might expect that humans would share some common habits, and therefore expect that other parts of language would more or less follow the same principle.", "In the future we will try our analytical method on other parts of language." ] }
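A minimal Python sketch of the SynTime-style recognition loop summarised in the extracted paper text above: assign coarse syntactic token types, expand each time token's boundaries over neighbouring modifiers and numerals to form time segments, and merge adjacent or overlapping segments into time expressions. The keyword sets and the numeral pattern below are illustrative stand-ins, not SynTime's actual token regular expressions, and the sketch omits the POS filtering, LINKAGE/COMMA rules, and dependent segments described in the paper.

```python
import re
from typing import List, Tuple

# Illustrative stand-ins (assumptions for this sketch) for SynTime's groups
# of token regular expressions; the real system uses far richer patterns.
TIME_TOKENS = {"monday", "february", "week", "weeks", "month", "year", "today"}
PREFIX_MODIFIERS = {"the", "this", "last", "next", "early", "late", "several"}
SUFFIX_MODIFIERS = {"ago", "later"}
NUMERAL = re.compile(r"^\d+(st|nd|rd|th)?$")


def token_type(token: str) -> str:
    """Assign a coarse syntactic token type to a single token."""
    t = token.lower()
    if t in TIME_TOKENS:
        return "TIME"
    if t in PREFIX_MODIFIERS:
        return "PREFIX"
    if t in SUFFIX_MODIFIERS:
        return "SUFFIX"
    if NUMERAL.match(t):
        return "NUMERAL"
    return "OTHER"


def find_segments(tokens: List[str]) -> List[Tuple[int, int]]:
    """For each time token, expand left over PREFIX/NUMERAL tokens and right
    over SUFFIX/NUMERAL tokens, returning (start, end) spans, end exclusive."""
    types = [token_type(t) for t in tokens]
    segments = []
    for i, ty in enumerate(types):
        if ty != "TIME":
            continue
        start, end = i, i + 1
        while start > 0 and types[start - 1] in {"PREFIX", "NUMERAL"}:
            start -= 1
        while end < len(tokens) and types[end] in {"SUFFIX", "NUMERAL"}:
            end += 1
        segments.append((start, end))
    return segments


def merge_segments(segments: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
    """Merge adjacent or overlapping time segments into time expressions."""
    merged: List[Tuple[int, int]] = []
    for start, end in sorted(segments):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged


if __name__ == "__main__":
    sentence = "They met several weeks ago in early February 1989".split()
    spans = merge_segments(find_segments(sentence))
    print([" ".join(sentence[s:e]) for s, e in spans])
    # -> ['several weeks ago', 'early February 1989']
```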
{ "paper_header_number": [ "1", "2", "3.2", "4", "4.1", "4.2", "4.2.1", "4.2.2", "4.2.3", "4.3", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Finding", "SynTime: Syntactic Token Types and General Heuristic Rules", "SynTime Construction", "Time Expression Recognition", "Time Token Identification", "Time Segment Identification", "Time Expression Extraction", "SynTime Expansion", "Experiments", "Limitations", "Conclusion and future work" ] }
GEM-SciDuet-train-99#paper-1262#slide-0
Time Expression Analysis
the third quarter of 1984
the third quarter of 1984
[]
GEM-SciDuet-train-99#paper-1262#slide-1
1262
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime 1 for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249 ], "paper_content_text": [ "Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014) .", "Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "1 Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b) , Gigaword (Parker et al., 2011) , WikiWars (Mazur and Dale, 2010) , and Tweets.", "From the analysis we make four findings about time expressions.", "First, most time expressions are very short, with 80% of time expressions containing no more than three tokens.", "Second, at least 91.8% of time expressions contain at least one time token.", "Third, the vocabulary used to express time information is very small, with a small group of keywords.", "Finally, words in time expressions demonstrate similar syntactic behaviour.", "All the findings relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "According to the findings we propose a typebased approach named SynTime ('Syn' stands for syntactic) to recognize time expressions.", "Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions.", "Time tokens are the words that explicitly express time information, such as time units (e.g., 'year').", "Modifiers modify time tokens; they appear before or after time tokens, e.g., 'several' and 'ago' in 'several years ago.'", "Numerals are ordinals and numbers.", "From free text SynTime first identifies time tokens, then recognizes modifiers and numerals.", "Naturally, SynTime is a rule-based tagger.", "The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules.", "The definition of token type in SynTime is inspired by 
part-of-speech in which \"linguists group some words of language into classes (sets) which show similar syntactic behaviour.\"", "(Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour.", "Other rulebased taggers define types for tokens based on their semantic meaning.", "For example, SUTime defines 5 semantic modifier types, such as frequency modifiers; 2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens.", "(See Section 4.1 for details.)", "Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves.", "SynTime instead designs general rules on the token types rather than on the tokens themselves.", "For example, our general rules do not work on tokens 'February' nor '1989' but on their token types 'MONTH' and 'YEAR.'", "That is why we call SynTime a type-based approach.", "More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position.", "In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion.", "The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time.", "The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "(The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.)", "Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.", "3 datasets.", "More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset.", "To sum up, we make the following contributions.", "• We analyze time expressions from four datasets and make four findings.", "The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "• We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules.", "Syn-Time is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages.", "• We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines.", "Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "The task is divided into two subtasks: recognition and normalization.", "Rule-based Time Expression Recognition.", "Rule-based time taggers like GUTime, Heidel-Time, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Strötgen and Gertz, 2010; Chang and Manning, 2012) .", "Heidel-Time (Strötgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression.", "SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang 
and Manning, 2014) .", "It first identifies individual words, then expands them to chunks, and finally to time expressions.", "Rule-based taggers achieve very good results in TempEval exercises.", "SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens.", "Moreover, SynTime designs rules in a heuristic way.", "Machine Learning based Method.", "Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions.", "Example features include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013) .", "The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Some models obtain good performance, and even achieve the highest F 1 of 82.71% on strict match in TempEval-3 (Bethard, 2013) .", "Outside TempEval exercises, Angeli et al.", "leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012) .", "In the method named UWTime, Lee et al.", "handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014) .", "The two methods explicitly use linguistic information.", "In (Lee et al., 2014) , especially, CCG could capture rich structure information of language, similar to the rule-based methods.", "Tabassum et al.", "focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016) .", "They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.", "However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time.", "Time Expression Normalization.", "Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Strötgen and Gertz, 2010; Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Because the rule systems have high similarity, Llorens et al.", "suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012) .", "Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016) .", "Lee et al.", "(Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al.", "(Tabassum et al., 2016 ) use a loglinear algorithm to normalize time expressions.", "SynTime focuses only on the recognition task.", "The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012) .", "We conduct an analysis on four datasets: Time-Bank, Gigaword, WikiWars, and Tweets.", "Time-Bank (Pustejovsky et al., 2003b ) is a benchmark dataset in TempEval series (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , consisting of 183 news articles.", "Gigaword (Parker et al., 2011 ) is a large 
automatically labelled dataset with 2,452 news articles and used in TempEval-3.", "WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010) .", "Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression.", "Table 1 summarizes the datasets.", "Finding From the four datasets, we analyze their time expressions and make four findings.", "We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics.", "Finding 1 Time expressions are very short.", "More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words.", "Figure 1 plots the length distribution of time expressions.", "Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length Finding 2 More than 91% of time expressions contain at least one time token.", "The second column in Table 2 reports the percentage of time expressions that contain at least one time token.", "We find that at least 91.81% of time expressions contain time token(s).", "(Some time expressions have no time token but depend on other time expressions; in '2 to 8 days,' for example, '2' depends on '8 days.')", "This suggests that time tokens account for time expressions.", "Therefore, to recognize time expressions, it is essential to recognize their time tokens.", "Finding 3 Only a small group of time-related keywords are used to express time information.", "From the time expressions in all four datasets, we find that the group of keywords used to express time information is small.", "Table 3 reports the number of distinct words and of distinct time tokens.", "The words/tokens are manually normalized before counting and their variants are ignored.", "For example, 'year' and '5yrs' are counted as one token 'year.'", "Numerals in the counting are ignored.", "Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable.", "Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282.", "Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets.", "This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets.", "In other words, time expressions highly overlap at their time tokens.", "Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents.", "For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text.", "Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD.", "This indicates that POS could not provide enough information to distinguish time expressions from common words.", "However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT.", "Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD.", "This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way.", "When seeing this, we realize that this is exactly how linguists define part-of-speech for 
language.", "4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language.", "The four findings all relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small.", "To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral.", "SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types.", "Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level.", "Token types at the type level group the tokens of time expressions.", "Heuristic rules lie at the rule level, working on token types rather than on tokens themselves.", "That is why the heuristic rules are general.", "For example, the heuristic rules do not work on tokens '1989' nor 'February,' but on their token types 'YEAR' and 'MONTH.'", "The heuristic rules are only relevant to token types, and are independent of specific tokens.", "For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English.", "The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types.", "Figure 3 shows the overview of SynTime in practice.", "Shown on the left-hand side, SynTime is initialized with regular expressions over tokens.", "After initialization, SynTime can be directly applied on text.", "On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type.", "The expansion enables SynTime to recognize time expressions in text from different domains and different text types.", "Shown on the right-hand side of Figure 3 , Syn-Time recognizes time expression through three main steps.", "In the first step, SynTime identifies time tokens from the POS-tagged raw text.", "Then around the time tokens SynTime searches for modifiers and numerals to form time segments.", "In the last step, SynTime transforms the time segments to time expressions.", "SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral.", "Token types to tokens is like POS tags to words; for example, 'February' has a POS tag of NNP and a token type of MONTH.", "Time Token.", "We define 15 token types for the time tokens and use their names similar to Joda-Time classes: 5 DECADE (-), YEAR (-), SEA-SON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), 
TIME ZONE (6), and ERA (2).", "Number in '()' indicates the number of distinct tokens in this token type.", "'-' indicates that this token type involves changing digits and cannot be counted.", "Modifier.", "We define 3 token types for the modifiers according to their possible positions relative to time tokens.", "Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2).", "LINKAGE (4) link two time tokens.", "Besides, we define 2 special modifier types, COMMA (1) for comma ',' and IN ARTICLE (2) for indefinite articles 'a' and 'an.'", "TimeML (Pustejovsky et al., 2003a) and Time-Bank (Pustejovsky et al., 2003b) do not treat most prepositions like 'on' as a part of time expressions.", "Thus SynTime does not collect those prepositions.", "Numeral.", "Number in time expressions can be a time token e.g., '10' in 'October 10, 2016,' or a modifier e.g., '10' in '10 days.'", "We define NU-MERAL (-) for the ordinals and numbers.", "SynTime Initialization.", "The token regular expressions for initializing SynTime are collected from SUTime, 6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval-3 (Chang and Manning, , 2013 .", "Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions.", "Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions.", "The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction.", "Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions.", "Some words might cause ambiguity.", "For example, 'May' could be a modal verb, or the fifth month of year.", "To filter out the ambiguous words, we use POS information.", "In implementation, we use Stanford POS Tagger; 7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2.", "Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions.", "In the next two steps, SynTime works on those token types.", "Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment.", "The searching is under simple heuristic rules in which the key idea is to expand the time token's boundaries.", "At first, each time token is a time segment.", "If it is either a PERIOD or DURATION, then no need to further search.", "Otherwise, search its left and its right for modifiers and numerals.", "For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching.", "For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching.", "Both the left and the right searching stop when reaching a COMMA or LINK-AGE or a non-modifier/numeral word.", "The left searching does not exceed the previous time token; the right searching does not exceed the next time token.", "A time segment consists of exactly one time token, and zero or some modifiers/numerals.", "A special kind of time segments do not contain any time token; they depend on other time segments next to them.", "For example, 
in '8 to 20 days,' 'to 20 days' is a time segment, and '8 to' forms a dependent time segment.", "(See Figure 4(e) .)", "Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment.", "We scan the time segments in a sentence from beginning to the end.", "A stand-alone time segment is a time expression.", "(See Figure 4(a) .)", "The focus is to deal with two or more time segments that are adjacent or overlapping.", "If two time segments s 1 and s 2 are adjacent, merge them to form a new time segment s 1 .", "(See Figure 4(b) .)", "Consider that s 1 and s 2 overlap at a shared boundary.", "According to our time segment identification, the shared boundary could be a modifier or a numeral.", "If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s 1 and s 2 .", "(See Figure 4(c) .)", "If the word is a LINKAGE, then extract s 1 as a time expression and continue scanning.", "When the shared boundary is a COMMA, merge s 1 and s 2 only if the COMMA's previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same.", "(See Figure 4(d) .)", "Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types.", "After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types.", "SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule.", "The expansion requires the words to be added to be annotated manually.", "We apply the initial SynTime on the time expressions from training text and list the words that are not covered.", "Whether the uncovered words are added to SynTime is manually determined.", "The rule for determination is that the added words can not cause ambiguity and should be generic.", "Wiki-Wars dataset contains a few examples like this: 'The time Arnold reached Quebec City.'", "Words in this example are extremely descriptive, and we do not collect them.", "In tweets, on the other hand, people may use abbreviations and informal variants; for example, '2day' and 'tday' are popular spellings of 'today.'", "Such kind of abbreviations and informal variants will be collected.", "According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much.", "In addition, we find that even in tweets people tend to use formal words.", "In the Twitter word clusters trained from 56 million English tweets, 8 the most often used words are the formal words, and their frequencies are much greater than the informal words'.", "The cluster of 'today,' 9 for example, its most often use is the formal one, 'today,' which appears 1,220,829 times; while its second most often use '2day' appears only 34,827 times.", "The low rate of informal words (e.g., about 3% in 'today' cluster) suggests that even in informal environment the manual keyword addition costs little.", "Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UW-Time) on three datasets (i.e., TimeBank, Wiki-Wars, and Tweets).", "WikiWars is a specific domain 
dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text.", "For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I.", "Experiment Setting Datasets.", "We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset that are collected from Twitter.", "For Tweets dataset, we randomly sample 4000 tweets and use SUTime to tag them.", "942 tweets of which each contains at least one time expression.", "From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions.", "We therefore roughly consider that SU-Time misses about 3% time expressions in tweets.", "Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank.", "We finally get 1,127 manually labeled time expressions.", "For the 942 tweets, we randomly sample 200 tweets as test set, and the rest 742 as training set, because a baseline UWTime requires training.", "Baseline Methods.", "We compare SynTime with methods: HeidelTime (Strötgen and Gertz, 2010) , SUTime (Chang and , and UW- Evaluation Metrics.", "We follow TempEval-3 and use their evaluation toolkit 10 to report P recision, Recall, and F 1 in terms of strict match and relaxed match (UzZaman et al., 2013).", "22, 1986' and 'February 01, 1989 ' at the level of word or of character.", "One suggestion is to consider a type-based learning method that could use type information.", "For example, the above two time expressions refer to the same pattern of 'MONTH NUMERAL COMMA Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. 
On TimeBank and Tweets datasets, only a few tokens are added, the corresponding results are affected slightly.", "This confirms that the size of time words is small, and that SynTime-I covers most of time words.", "On WikiWars dataset, relatively more tokens are added, SynTime-E performs much better than SynTime-I, especially in recall.", "It improves the recall by 3.25% in strict match and by 2.98% in relaxed match.", "This indicates that with more words added from specific domains (e.g., WikiWars dataset about war), SynTime can significantly improve the performance.", "Experiment Result Limitations SynTime assumes that words are tokenized and POS tagged correctly.", "In reality, however, the tokenized and tagged words are not that perfect, due to the limitation of used tools.", "For example, Stanford POS Tagger assigns VBD to the word 'sat' in 'friday or sat' while whose tag should be NNP.", "The incorrect tokens and POS tags affect the result.", "Conclusion and future work We conduct an analysis on time expressions from four datasets, and find that time expressions in general are very short and expressed by a small vocabulary, and words in time expressions demonstrate similar syntactic behavior.", "Our findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "Inspired by part-of-speech, based on the findings, we define a syntactic type system for the time expression, and propose a type-based time expression tagger, named by SynTime.", "SynTime defines syntactic token types for tokens and on the token types it designs general heuristic rules based on the idea of boundary expansion.", "Experiments on three datasets show that SynTime outperforms the stateof-the-art baselines, including rule-based time taggers and machine learning based time tagger.", "Because our heuristic rules are quite simple, Syn-Time is light-weight and runs in real time.", "Our token types and heuristic rules are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.", "Time expression is part of language and follows the principle of least effort.", "Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995) , we might expect that humans would share some common habits, and therefore expect that other parts of language would more or less follow the same principle.", "In the future we will try our analytical method on other parts of language." ] }
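The recognition flow described in the paper content above (time token identification, time segment identification by boundary expansion, and merging of segments into time expressions) can be sketched in a few lines of Python. This is only an illustrative sketch, not the authors' SynTime implementation: the token regular expressions, type names, and merge rule below are simplified assumptions, and the real system additionally uses POS tags to filter ambiguous words such as 'May' and handles COMMA/LINKAGE modifiers that are omitted here.

    import re

    # Simplified token regular expressions, grouped under (assumed) token types.
    TIME_TOKEN = {
        "MONTH": r"january|february|march|april|may|june|july|august|september|october|november|december",
        "YEAR": r"\d{4}",
        "TIME_UNIT": r"years?|months?|weeks?|days?|hours?|minutes?",
        "TIMELINE": r"today|yesterday|tomorrow|tonight",
    }
    MODIFIER = {
        "PREFIX": r"early|late|last|next|past|several",  # modifiers before a time token
        "SUFFIX": r"ago|later",                          # modifiers after a time token
    }
    NUMERAL = r"\d+(st|nd|rd|th)?"

    def tag(tok):
        """Step 1: map a single token to a token type, or None.
        (The real system also consults POS tags to rule out, e.g., modal 'may'.)"""
        for name, pat in TIME_TOKEN.items():
            if re.fullmatch(pat, tok, re.IGNORECASE):
                return name
        for name, pat in MODIFIER.items():
            if re.fullmatch(pat, tok, re.IGNORECASE):
                return name
        if re.fullmatch(NUMERAL, tok, re.IGNORECASE):
            return "NUMERAL"
        return None

    def segments(types):
        """Step 2: expand every time token left/right over neighbouring modifiers/numerals."""
        segs = []
        for i, t in enumerate(types):
            if t in TIME_TOKEN:
                left = i
                while left > 0 and types[left - 1] in ("PREFIX", "NUMERAL"):
                    left -= 1
                right = i
                while right + 1 < len(types) and types[right + 1] in ("SUFFIX", "NUMERAL"):
                    right += 1
                segs.append((left, right))
        return segs

    def expressions(segs):
        """Step 3: merge adjacent or overlapping segments into time expressions."""
        merged = []
        for a, b in sorted(segs):
            if merged and a <= merged[-1][1] + 1:
                merged[-1] = (merged[-1][0], max(merged[-1][1], b))
            else:
                merged.append((a, b))
        return merged

    tokens = "I met her several years ago on October 22".split()
    types = [tag(t) for t in tokens]
    for a, b in expressions(segments(types)):
        print(" ".join(tokens[a:b + 1]))  # -> "several years ago", then "October 22"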
{ "paper_header_number": [ "1", "2", "3.2", "4", "4.1", "4.2", "4.2.1", "4.2.2", "4.2.3", "4.3", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Finding", "SynTime: Syntactic Token Types and General Heuristic Rules", "SynTime Construction", "Time Expression Recognition", "Time Token Identification", "Time Segment Identification", "Time Expression Extraction", "SynTime Expansion", "Experiments", "Limitations", "Conclusion and future work" ] }
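The experiments summarized above report precision, recall, and F1 under strict match and relaxed match, following the TempEval-3 evaluation toolkit. As a rough illustration only (not that toolkit), strict match can be read as exact span equality and relaxed match as span overlap; the function below is a simplified sketch under that assumption, with spans given as (start, end) token offsets.

    def prf(gold, pred, relaxed=False):
        """Toy precision/recall/F1 over spans.
        strict: spans must be identical; relaxed: spans only need to overlap."""
        def hit(p, g):
            return (p[0] <= g[1] and g[0] <= p[1]) if relaxed else p == g
        tp_pred = sum(any(hit(p, g) for g in gold) for p in pred)  # matched predictions
        tp_gold = sum(any(hit(p, g) for p in pred) for g in gold)  # matched gold spans
        precision = tp_pred / len(pred) if pred else 0.0
        recall = tp_gold / len(gold) if gold else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1

    gold = [(3, 5), (7, 8)]
    pred = [(3, 5), (7, 7)]
    print(prf(gold, pred))                 # strict:  (0.5, 0.5, 0.5)
    print(prf(gold, pred, relaxed=True))   # relaxed: (1.0, 1.0, 1.0)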
GEM-SciDuet-train-99#paper-1262#slide-1
Time Expression Analysis Datasets
TimeBank: a benchmark dataset used in TempEval series Gigaword: a large dataset with generated labels and used in TempEval-3 WikiWars: a specific domain dataset collected from Wikipedia about war Tweets: a manually labeled dataset with informal text collected from Twitter Statistics of the datasets Dataset #Docs #Words #TIMEX The four datasets vary in source, size, domain, and text type, but we will see that their time expressions demonstrate similar characteristics.
TimeBank: a benchmark dataset used in TempEval series Gigaword: a large dataset with generated labels and used in TempEval-3 WikiWars: a specific domain dataset collected from Wikipedia about war Tweets: a manually labeled dataset with informal text collected from Twitter Statistics of the datasets Dataset #Docs #Words #TIMEX The four datasets vary in source, size, domain, and text type, but we will see that their time expressions demonstrate similar characteristics.
[]
GEM-SciDuet-train-99#paper-1262#slide-2
1262
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime 1 for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249 ], "paper_content_text": [ "Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014) .", "Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "1 Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b) , Gigaword (Parker et al., 2011) , WikiWars (Mazur and Dale, 2010) , and Tweets.", "From the analysis we make four findings about time expressions.", "First, most time expressions are very short, with 80% of time expressions containing no more than three tokens.", "Second, at least 91.8% of time expressions contain at least one time token.", "Third, the vocabulary used to express time information is very small, with a small group of keywords.", "Finally, words in time expressions demonstrate similar syntactic behaviour.", "All the findings relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "According to the findings we propose a typebased approach named SynTime ('Syn' stands for syntactic) to recognize time expressions.", "Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions.", "Time tokens are the words that explicitly express time information, such as time units (e.g., 'year').", "Modifiers modify time tokens; they appear before or after time tokens, e.g., 'several' and 'ago' in 'several years ago.'", "Numerals are ordinals and numbers.", "From free text SynTime first identifies time tokens, then recognizes modifiers and numerals.", "Naturally, SynTime is a rule-based tagger.", "The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules.", "The definition of token type in SynTime is inspired by 
part-of-speech in which \"linguists group some words of language into classes (sets) which show similar syntactic behaviour.\"", "(Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour.", "Other rulebased taggers define types for tokens based on their semantic meaning.", "For example, SUTime defines 5 semantic modifier types, such as frequency modifiers; 2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens.", "(See Section 4.1 for details.)", "Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves.", "SynTime instead designs general rules on the token types rather than on the tokens themselves.", "For example, our general rules do not work on tokens 'February' nor '1989' but on their token types 'MONTH' and 'YEAR.'", "That is why we call SynTime a type-based approach.", "More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position.", "In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion.", "The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time.", "The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "(The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.)", "Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.", "3 datasets.", "More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset.", "To sum up, we make the following contributions.", "• We analyze time expressions from four datasets and make four findings.", "The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "• We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules.", "Syn-Time is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages.", "• We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines.", "Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "The task is divided into two subtasks: recognition and normalization.", "Rule-based Time Expression Recognition.", "Rule-based time taggers like GUTime, Heidel-Time, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Strötgen and Gertz, 2010; Chang and Manning, 2012) .", "Heidel-Time (Strötgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression.", "SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang 
and Manning, 2014) .", "It first identifies individual words, then expands them to chunks, and finally to time expressions.", "Rule-based taggers achieve very good results in TempEval exercises.", "SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens.", "Moreover, SynTime designs rules in a heuristic way.", "Machine Learning based Method.", "Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions.", "Example features include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013) .", "The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Some models obtain good performance, and even achieve the highest F 1 of 82.71% on strict match in TempEval-3 (Bethard, 2013) .", "Outside TempEval exercises, Angeli et al.", "leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012) .", "In the method named UWTime, Lee et al.", "handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014) .", "The two methods explicitly use linguistic information.", "In (Lee et al., 2014) , especially, CCG could capture rich structure information of language, similar to the rule-based methods.", "Tabassum et al.", "focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016) .", "They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.", "However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time.", "Time Expression Normalization.", "Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Strötgen and Gertz, 2010; Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Because the rule systems have high similarity, Llorens et al.", "suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012) .", "Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016) .", "Lee et al.", "(Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al.", "(Tabassum et al., 2016 ) use a loglinear algorithm to normalize time expressions.", "SynTime focuses only on the recognition task.", "The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012) .", "We conduct an analysis on four datasets: Time-Bank, Gigaword, WikiWars, and Tweets.", "Time-Bank (Pustejovsky et al., 2003b ) is a benchmark dataset in TempEval series (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , consisting of 183 news articles.", "Gigaword (Parker et al., 2011 ) is a large 
automatically labelled dataset with 2,452 news articles and used in TempEval-3.", "WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010) .", "Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression.", "Table 1 summarizes the datasets.", "Finding From the four datasets, we analyze their time expressions and make four findings.", "We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics.", "Finding 1 Time expressions are very short.", "More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words.", "Figure 1 plots the length distribution of time expressions.", "Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length Finding 2 More than 91% of time expressions contain at least one time token.", "The second column in Table 2 reports the percentage of time expressions that contain at least one time token.", "We find that at least 91.81% of time expressions contain time token(s).", "(Some time expressions have no time token but depend on other time expressions; in '2 to 8 days,' for example, '2' depends on '8 days.')", "This suggests that time tokens account for time expressions.", "Therefore, to recognize time expressions, it is essential to recognize their time tokens.", "Finding 3 Only a small group of time-related keywords are used to express time information.", "From the time expressions in all four datasets, we find that the group of keywords used to express time information is small.", "Table 3 reports the number of distinct words and of distinct time tokens.", "The words/tokens are manually normalized before counting and their variants are ignored.", "For example, 'year' and '5yrs' are counted as one token 'year.'", "Numerals in the counting are ignored.", "Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable.", "Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282.", "Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets.", "This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets.", "In other words, time expressions highly overlap at their time tokens.", "Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents.", "For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text.", "Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD.", "This indicates that POS could not provide enough information to distinguish time expressions from common words.", "However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT.", "Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD.", "This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way.", "When seeing this, we realize that this is exactly how linguists define part-of-speech for 
language.", "4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language.", "The four findings all relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small.", "To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral.", "SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types.", "Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level.", "Token types at the type level group the tokens of time expressions.", "Heuristic rules lie at the rule level, working on token types rather than on tokens themselves.", "That is why the heuristic rules are general.", "For example, the heuristic rules do not work on tokens '1989' nor 'February,' but on their token types 'YEAR' and 'MONTH.'", "The heuristic rules are only relevant to token types, and are independent of specific tokens.", "For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English.", "The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types.", "Figure 3 shows the overview of SynTime in practice.", "Shown on the left-hand side, SynTime is initialized with regular expressions over tokens.", "After initialization, SynTime can be directly applied on text.", "On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type.", "The expansion enables SynTime to recognize time expressions in text from different domains and different text types.", "Shown on the right-hand side of Figure 3 , Syn-Time recognizes time expression through three main steps.", "In the first step, SynTime identifies time tokens from the POS-tagged raw text.", "Then around the time tokens SynTime searches for modifiers and numerals to form time segments.", "In the last step, SynTime transforms the time segments to time expressions.", "SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral.", "Token types to tokens is like POS tags to words; for example, 'February' has a POS tag of NNP and a token type of MONTH.", "Time Token.", "We define 15 token types for the time tokens and use their names similar to Joda-Time classes: 5 DECADE (-), YEAR (-), SEA-SON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), 
TIME ZONE (6), and ERA (2).", "Number in '()' indicates the number of distinct tokens in this token type.", "'-' indicates that this token type involves changing digits and cannot be counted.", "Modifier.", "We define 3 token types for the modifiers according to their possible positions relative to time tokens.", "Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2).", "LINKAGE (4) link two time tokens.", "Besides, we define 2 special modifier types, COMMA (1) for comma ',' and IN ARTICLE (2) for indefinite articles 'a' and 'an.'", "TimeML (Pustejovsky et al., 2003a) and Time-Bank (Pustejovsky et al., 2003b) do not treat most prepositions like 'on' as a part of time expressions.", "Thus SynTime does not collect those prepositions.", "Numeral.", "Number in time expressions can be a time token e.g., '10' in 'October 10, 2016,' or a modifier e.g., '10' in '10 days.'", "We define NU-MERAL (-) for the ordinals and numbers.", "SynTime Initialization.", "The token regular expressions for initializing SynTime are collected from SUTime, 6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval-3 (Chang and Manning, , 2013 .", "Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions.", "Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions.", "The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction.", "Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions.", "Some words might cause ambiguity.", "For example, 'May' could be a modal verb, or the fifth month of year.", "To filter out the ambiguous words, we use POS information.", "In implementation, we use Stanford POS Tagger; 7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2.", "Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions.", "In the next two steps, SynTime works on those token types.", "Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment.", "The searching is under simple heuristic rules in which the key idea is to expand the time token's boundaries.", "At first, each time token is a time segment.", "If it is either a PERIOD or DURATION, then no need to further search.", "Otherwise, search its left and its right for modifiers and numerals.", "For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching.", "For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching.", "Both the left and the right searching stop when reaching a COMMA or LINK-AGE or a non-modifier/numeral word.", "The left searching does not exceed the previous time token; the right searching does not exceed the next time token.", "A time segment consists of exactly one time token, and zero or some modifiers/numerals.", "A special kind of time segments do not contain any time token; they depend on other time segments next to them.", "For example, 
in '8 to 20 days,' 'to 20 days' is a time segment, and '8 to' forms a dependent time segment.", "(See Figure 4(e) .)", "Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment.", "We scan the time segments in a sentence from beginning to the end.", "A stand-alone time segment is a time expression.", "(See Figure 4(a) .)", "The focus is to deal with two or more time segments that are adjacent or overlapping.", "If two time segments s 1 and s 2 are adjacent, merge them to form a new time segment s 1 .", "(See Figure 4(b) .)", "Consider that s 1 and s 2 overlap at a shared boundary.", "According to our time segment identification, the shared boundary could be a modifier or a numeral.", "If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s 1 and s 2 .", "(See Figure 4(c) .)", "If the word is a LINKAGE, then extract s 1 as a time expression and continue scanning.", "When the shared boundary is a COMMA, merge s 1 and s 2 only if the COMMA's previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same.", "(See Figure 4(d) .)", "Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types.", "After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types.", "SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule.", "The expansion requires the words to be added to be annotated manually.", "We apply the initial SynTime on the time expressions from training text and list the words that are not covered.", "Whether the uncovered words are added to SynTime is manually determined.", "The rule for determination is that the added words can not cause ambiguity and should be generic.", "Wiki-Wars dataset contains a few examples like this: 'The time Arnold reached Quebec City.'", "Words in this example are extremely descriptive, and we do not collect them.", "In tweets, on the other hand, people may use abbreviations and informal variants; for example, '2day' and 'tday' are popular spellings of 'today.'", "Such kind of abbreviations and informal variants will be collected.", "According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much.", "In addition, we find that even in tweets people tend to use formal words.", "In the Twitter word clusters trained from 56 million English tweets, 8 the most often used words are the formal words, and their frequencies are much greater than the informal words'.", "The cluster of 'today,' 9 for example, its most often use is the formal one, 'today,' which appears 1,220,829 times; while its second most often use '2day' appears only 34,827 times.", "The low rate of informal words (e.g., about 3% in 'today' cluster) suggests that even in informal environment the manual keyword addition costs little.", "Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UW-Time) on three datasets (i.e., TimeBank, Wiki-Wars, and Tweets).", "WikiWars is a specific domain 
dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text.", "For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I.", "Experiment Setting Datasets.", "We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset that are collected from Twitter.", "For Tweets dataset, we randomly sample 4000 tweets and use SUTime to tag them.", "942 tweets of which each contains at least one time expression.", "From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions.", "We therefore roughly consider that SU-Time misses about 3% time expressions in tweets.", "Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank.", "We finally get 1,127 manually labeled time expressions.", "For the 942 tweets, we randomly sample 200 tweets as test set, and the rest 742 as training set, because a baseline UWTime requires training.", "Baseline Methods.", "We compare SynTime with methods: HeidelTime (Strötgen and Gertz, 2010) , SUTime (Chang and , and UW- Evaluation Metrics.", "We follow TempEval-3 and use their evaluation toolkit 10 to report P recision, Recall, and F 1 in terms of strict match and relaxed match (UzZaman et al., 2013).", "22, 1986' and 'February 01, 1989 ' at the level of word or of character.", "One suggestion is to consider a type-based learning method that could use type information.", "For example, the above two time expressions refer to the same pattern of 'MONTH NUMERAL COMMA Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. 
On TimeBank and Tweets datasets, only a few tokens are added, the corresponding results are affected slightly.", "This confirms that the size of time words is small, and that SynTime-I covers most of time words.", "On WikiWars dataset, relatively more tokens are added, SynTime-E performs much better than SynTime-I, especially in recall.", "It improves the recall by 3.25% in strict match and by 2.98% in relaxed match.", "This indicates that with more words added from specific domains (e.g., WikiWars dataset about war), SynTime can significantly improve the performance.", "Experiment Result Limitations SynTime assumes that words are tokenized and POS tagged correctly.", "In reality, however, the tokenized and tagged words are not that perfect, due to the limitation of used tools.", "For example, Stanford POS Tagger assigns VBD to the word 'sat' in 'friday or sat' while whose tag should be NNP.", "The incorrect tokens and POS tags affect the result.", "Conclusion and future work We conduct an analysis on time expressions from four datasets, and find that time expressions in general are very short and expressed by a small vocabulary, and words in time expressions demonstrate similar syntactic behavior.", "Our findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "Inspired by part-of-speech, based on the findings, we define a syntactic type system for the time expression, and propose a type-based time expression tagger, named by SynTime.", "SynTime defines syntactic token types for tokens and on the token types it designs general heuristic rules based on the idea of boundary expansion.", "Experiments on three datasets show that SynTime outperforms the stateof-the-art baselines, including rule-based time taggers and machine learning based time tagger.", "Because our heuristic rules are quite simple, Syn-Time is light-weight and runs in real time.", "Our token types and heuristic rules are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.", "Time expression is part of language and follows the principle of least effort.", "Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995) , we might expect that humans would share some common habits, and therefore expect that other parts of language would more or less follow the same principle.", "In the future we will try our analytical method on other parts of language." ] }
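The conclusion above notes that SynTime can be expanded by adding token regular expressions under the existing token types, without touching any rule, and that porting it to another language only requires token regular expressions in that language under the same types. A minimal sketch of that idea follows; the type names, patterns, and German examples are illustrative assumptions, not the actual SynTime resource files (only '2day' and 'tday' are taken from the text above).

    # Token types map to lists of token-level regular expressions; the heuristic
    # rules only ever see the type names, so adding new tokens (e.g. informal
    # tweet spellings, or tokens of another language) needs no rule changes.
    token_types = {
        "TIMELINE": [r"today", r"yesterday", r"tomorrow"],
        "TIME_UNIT": [r"days?", r"weeks?", r"months?", r"years?"],
    }

    def expand(types, type_name, new_patterns):
        """Register extra token patterns under an existing token type."""
        types.setdefault(type_name, []).extend(new_patterns)

    # Domain/text-type expansion: informal tweet spellings of 'today'.
    expand(token_types, "TIMELINE", [r"2day", r"tday"])
    # Language expansion (hypothetical): German time units under the same type system.
    expand(token_types, "TIME_UNIT", [r"tage?n?", r"wochen?", r"jahre?n?"])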
{ "paper_header_number": [ "1", "2", "3.2", "4", "4.1", "4.2", "4.2.1", "4.2.2", "4.2.3", "4.3", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Finding", "SynTime: Syntactic Token Types and General Heuristic Rules", "SynTime Construction", "Time Expression Recognition", "Time Token Identification", "Time Segment Identification", "Time Expression Extraction", "SynTime Expansion", "Experiments", "Limitations", "Conclusion and future work" ] }
GEM-SciDuet-train-99#paper-1262#slide-2
Time Expression Analysis Finding 1
Short time expressions: time expressions are very short. 80% of time expressions contain no more than 3 words Average length of time expressions Time expressions follow a similar length distribution Average length: about 2 words
Short time expressions: time expressions are very short. 80% of time expressions contain no more than 3 words Average length of time expressions Time expressions follow a similar length distribution Average length: about 2 words
[]
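Finding 1 above (most time expressions contain no more than three words, with an average length of about two) is the kind of statistic that can be reproduced from any annotated corpus. A minimal sketch, assuming the annotated time expressions are available as plain tokenized strings; the four example strings are invented for illustration only.

    from collections import Counter

    expressions = ["several years ago", "October 22 , 1986", "today", "the next two weeks"]
    lengths = [len(e.split()) for e in expressions]

    dist = Counter(lengths)                        # token length -> count
    short = sum(1 for n in lengths if n <= 3) / len(lengths)
    print(dist)                                    # Counter({4: 2, 3: 1, 1: 1})
    print(f"<=3 tokens: {short:.0%}")              # share of short expressions (here 50%)
    print(f"average length: {sum(lengths)/len(lengths):.1f} tokens")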
GEM-SciDuet-train-99#paper-1262#slide-3
1262
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime 1 for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249 ], "paper_content_text": [ "Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014) .", "Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "1 Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b) , Gigaword (Parker et al., 2011) , WikiWars (Mazur and Dale, 2010) , and Tweets.", "From the analysis we make four findings about time expressions.", "First, most time expressions are very short, with 80% of time expressions containing no more than three tokens.", "Second, at least 91.8% of time expressions contain at least one time token.", "Third, the vocabulary used to express time information is very small, with a small group of keywords.", "Finally, words in time expressions demonstrate similar syntactic behaviour.", "All the findings relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "According to the findings we propose a typebased approach named SynTime ('Syn' stands for syntactic) to recognize time expressions.", "Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions.", "Time tokens are the words that explicitly express time information, such as time units (e.g., 'year').", "Modifiers modify time tokens; they appear before or after time tokens, e.g., 'several' and 'ago' in 'several years ago.'", "Numerals are ordinals and numbers.", "From free text SynTime first identifies time tokens, then recognizes modifiers and numerals.", "Naturally, SynTime is a rule-based tagger.", "The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules.", "The definition of token type in SynTime is inspired by 
part-of-speech in which \"linguists group some words of language into classes (sets) which show similar syntactic behaviour.\"", "(Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour.", "Other rulebased taggers define types for tokens based on their semantic meaning.", "For example, SUTime defines 5 semantic modifier types, such as frequency modifiers; 2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens.", "(See Section 4.1 for details.)", "Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves.", "SynTime instead designs general rules on the token types rather than on the tokens themselves.", "For example, our general rules do not work on tokens 'February' nor '1989' but on their token types 'MONTH' and 'YEAR.'", "That is why we call SynTime a type-based approach.", "More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position.", "In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion.", "The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time.", "The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "(The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.)", "Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.", "3 datasets.", "More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset.", "To sum up, we make the following contributions.", "• We analyze time expressions from four datasets and make four findings.", "The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "• We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules.", "Syn-Time is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages.", "• We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines.", "Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "The task is divided into two subtasks: recognition and normalization.", "Rule-based Time Expression Recognition.", "Rule-based time taggers like GUTime, Heidel-Time, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Strötgen and Gertz, 2010; Chang and Manning, 2012) .", "Heidel-Time (Strötgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression.", "SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang 
and Manning, 2014) .", "It first identifies individual words, then expands them to chunks, and finally to time expressions.", "Rule-based taggers achieve very good results in TempEval exercises.", "SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens.", "Moreover, SynTime designs rules in a heuristic way.", "Machine Learning based Method.", "Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions.", "Example features include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013) .", "The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Some models obtain good performance, and even achieve the highest F 1 of 82.71% on strict match in TempEval-3 (Bethard, 2013) .", "Outside TempEval exercises, Angeli et al.", "leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012) .", "In the method named UWTime, Lee et al.", "handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014) .", "The two methods explicitly use linguistic information.", "In (Lee et al., 2014) , especially, CCG could capture rich structure information of language, similar to the rule-based methods.", "Tabassum et al.", "focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016) .", "They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.", "However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time.", "Time Expression Normalization.", "Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Strötgen and Gertz, 2010; Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Because the rule systems have high similarity, Llorens et al.", "suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012) .", "Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016) .", "Lee et al.", "(Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al.", "(Tabassum et al., 2016 ) use a loglinear algorithm to normalize time expressions.", "SynTime focuses only on the recognition task.", "The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012) .", "We conduct an analysis on four datasets: Time-Bank, Gigaword, WikiWars, and Tweets.", "Time-Bank (Pustejovsky et al., 2003b ) is a benchmark dataset in TempEval series (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , consisting of 183 news articles.", "Gigaword (Parker et al., 2011 ) is a large 
automatically labelled dataset with 2,452 news articles and used in TempEval-3.", "WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010) .", "Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression.", "Table 1 summarizes the datasets.", "Finding From the four datasets, we analyze their time expressions and make four findings.", "We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics.", "Finding 1 Time expressions are very short.", "More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words.", "Figure 1 plots the length distribution of time expressions.", "Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length Finding 2 More than 91% of time expressions contain at least one time token.", "The second column in Table 2 reports the percentage of time expressions that contain at least one time token.", "We find that at least 91.81% of time expressions contain time token(s).", "(Some time expressions have no time token but depend on other time expressions; in '2 to 8 days,' for example, '2' depends on '8 days.')", "This suggests that time tokens account for time expressions.", "Therefore, to recognize time expressions, it is essential to recognize their time tokens.", "Finding 3 Only a small group of time-related keywords are used to express time information.", "From the time expressions in all four datasets, we find that the group of keywords used to express time information is small.", "Table 3 reports the number of distinct words and of distinct time tokens.", "The words/tokens are manually normalized before counting and their variants are ignored.", "For example, 'year' and '5yrs' are counted as one token 'year.'", "Numerals in the counting are ignored.", "Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable.", "Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282.", "Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets.", "This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets.", "In other words, time expressions highly overlap at their time tokens.", "Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents.", "For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text.", "Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD.", "This indicates that POS could not provide enough information to distinguish time expressions from common words.", "However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT.", "Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD.", "This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way.", "When seeing this, we realize that this is exactly how linguists define part-of-speech for 
language.", "4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language.", "The four findings all relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small.", "To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral.", "SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types.", "Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level.", "Token types at the type level group the tokens of time expressions.", "Heuristic rules lie at the rule level, working on token types rather than on tokens themselves.", "That is why the heuristic rules are general.", "For example, the heuristic rules do not work on tokens '1989' nor 'February,' but on their token types 'YEAR' and 'MONTH.'", "The heuristic rules are only relevant to token types, and are independent of specific tokens.", "For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English.", "The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types.", "Figure 3 shows the overview of SynTime in practice.", "Shown on the left-hand side, SynTime is initialized with regular expressions over tokens.", "After initialization, SynTime can be directly applied on text.", "On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type.", "The expansion enables SynTime to recognize time expressions in text from different domains and different text types.", "Shown on the right-hand side of Figure 3 , Syn-Time recognizes time expression through three main steps.", "In the first step, SynTime identifies time tokens from the POS-tagged raw text.", "Then around the time tokens SynTime searches for modifiers and numerals to form time segments.", "In the last step, SynTime transforms the time segments to time expressions.", "SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral.", "Token types to tokens is like POS tags to words; for example, 'February' has a POS tag of NNP and a token type of MONTH.", "Time Token.", "We define 15 token types for the time tokens and use their names similar to Joda-Time classes: 5 DECADE (-), YEAR (-), SEA-SON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), 
TIME ZONE (6), and ERA (2).", "Number in '()' indicates the number of distinct tokens in this token type.", "'-' indicates that this token type involves changing digits and cannot be counted.", "Modifier.", "We define 3 token types for the modifiers according to their possible positions relative to time tokens.", "Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2).", "LINKAGE (4) link two time tokens.", "Besides, we define 2 special modifier types, COMMA (1) for comma ',' and IN ARTICLE (2) for indefinite articles 'a' and 'an.'", "TimeML (Pustejovsky et al., 2003a) and Time-Bank (Pustejovsky et al., 2003b) do not treat most prepositions like 'on' as a part of time expressions.", "Thus SynTime does not collect those prepositions.", "Numeral.", "Number in time expressions can be a time token e.g., '10' in 'October 10, 2016,' or a modifier e.g., '10' in '10 days.'", "We define NU-MERAL (-) for the ordinals and numbers.", "SynTime Initialization.", "The token regular expressions for initializing SynTime are collected from SUTime, 6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval-3 (Chang and Manning, , 2013 .", "Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions.", "Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions.", "The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction.", "Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions.", "Some words might cause ambiguity.", "For example, 'May' could be a modal verb, or the fifth month of year.", "To filter out the ambiguous words, we use POS information.", "In implementation, we use Stanford POS Tagger; 7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2.", "Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions.", "In the next two steps, SynTime works on those token types.", "Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment.", "The searching is under simple heuristic rules in which the key idea is to expand the time token's boundaries.", "At first, each time token is a time segment.", "If it is either a PERIOD or DURATION, then no need to further search.", "Otherwise, search its left and its right for modifiers and numerals.", "For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching.", "For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching.", "Both the left and the right searching stop when reaching a COMMA or LINK-AGE or a non-modifier/numeral word.", "The left searching does not exceed the previous time token; the right searching does not exceed the next time token.", "A time segment consists of exactly one time token, and zero or some modifiers/numerals.", "A special kind of time segments do not contain any time token; they depend on other time segments next to them.", "For example, 
in '8 to 20 days,' 'to 20 days' is a time segment, and '8 to' forms a dependent time segment.", "(See Figure 4(e) .)", "Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment.", "We scan the time segments in a sentence from beginning to the end.", "A stand-alone time segment is a time expression.", "(See Figure 4(a) .)", "The focus is to deal with two or more time segments that are adjacent or overlapping.", "If two time segments s 1 and s 2 are adjacent, merge them to form a new time segment s 1 .", "(See Figure 4(b) .)", "Consider that s 1 and s 2 overlap at a shared boundary.", "According to our time segment identification, the shared boundary could be a modifier or a numeral.", "If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s 1 and s 2 .", "(See Figure 4(c) .)", "If the word is a LINKAGE, then extract s 1 as a time expression and continue scanning.", "When the shared boundary is a COMMA, merge s 1 and s 2 only if the COMMA's previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same.", "(See Figure 4(d) .)", "Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types.", "After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types.", "SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule.", "The expansion requires the words to be added to be annotated manually.", "We apply the initial SynTime on the time expressions from training text and list the words that are not covered.", "Whether the uncovered words are added to SynTime is manually determined.", "The rule for determination is that the added words can not cause ambiguity and should be generic.", "Wiki-Wars dataset contains a few examples like this: 'The time Arnold reached Quebec City.'", "Words in this example are extremely descriptive, and we do not collect them.", "In tweets, on the other hand, people may use abbreviations and informal variants; for example, '2day' and 'tday' are popular spellings of 'today.'", "Such kind of abbreviations and informal variants will be collected.", "According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much.", "In addition, we find that even in tweets people tend to use formal words.", "In the Twitter word clusters trained from 56 million English tweets, 8 the most often used words are the formal words, and their frequencies are much greater than the informal words'.", "The cluster of 'today,' 9 for example, its most often use is the formal one, 'today,' which appears 1,220,829 times; while its second most often use '2day' appears only 34,827 times.", "The low rate of informal words (e.g., about 3% in 'today' cluster) suggests that even in informal environment the manual keyword addition costs little.", "Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UW-Time) on three datasets (i.e., TimeBank, Wiki-Wars, and Tweets).", "WikiWars is a specific domain 
dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text.", "For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I.", "Experiment Setting Datasets.", "We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset that are collected from Twitter.", "For Tweets dataset, we randomly sample 4000 tweets and use SUTime to tag them.", "942 tweets of which each contains at least one time expression.", "From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions.", "We therefore roughly consider that SU-Time misses about 3% time expressions in tweets.", "Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank.", "We finally get 1,127 manually labeled time expressions.", "For the 942 tweets, we randomly sample 200 tweets as test set, and the rest 742 as training set, because a baseline UWTime requires training.", "Baseline Methods.", "We compare SynTime with methods: HeidelTime (Strötgen and Gertz, 2010) , SUTime (Chang and , and UW- Evaluation Metrics.", "We follow TempEval-3 and use their evaluation toolkit 10 to report P recision, Recall, and F 1 in terms of strict match and relaxed match (UzZaman et al., 2013).", "22, 1986' and 'February 01, 1989 ' at the level of word or of character.", "One suggestion is to consider a type-based learning method that could use type information.", "For example, the above two time expressions refer to the same pattern of 'MONTH NUMERAL COMMA Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. 
On TimeBank and Tweets datasets, only a few tokens are added, the corresponding results are affected slightly.", "This confirms that the size of time words is small, and that SynTime-I covers most of time words.", "On WikiWars dataset, relatively more tokens are added, SynTime-E performs much better than SynTime-I, especially in recall.", "It improves the recall by 3.25% in strict match and by 2.98% in relaxed match.", "This indicates that with more words added from specific domains (e.g., WikiWars dataset about war), SynTime can significantly improve the performance.", "Experiment Result Limitations SynTime assumes that words are tokenized and POS tagged correctly.", "In reality, however, the tokenized and tagged words are not that perfect, due to the limitation of used tools.", "For example, Stanford POS Tagger assigns VBD to the word 'sat' in 'friday or sat' while whose tag should be NNP.", "The incorrect tokens and POS tags affect the result.", "Conclusion and future work We conduct an analysis on time expressions from four datasets, and find that time expressions in general are very short and expressed by a small vocabulary, and words in time expressions demonstrate similar syntactic behavior.", "Our findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "Inspired by part-of-speech, based on the findings, we define a syntactic type system for the time expression, and propose a type-based time expression tagger, named by SynTime.", "SynTime defines syntactic token types for tokens and on the token types it designs general heuristic rules based on the idea of boundary expansion.", "Experiments on three datasets show that SynTime outperforms the stateof-the-art baselines, including rule-based time taggers and machine learning based time tagger.", "Because our heuristic rules are quite simple, Syn-Time is light-weight and runs in real time.", "Our token types and heuristic rules are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.", "Time expression is part of language and follows the principle of least effort.", "Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995) , we might expect that humans would share some common habits, and therefore expect that other parts of language would more or less follow the same principle.", "In the future we will try our analytical method on other parts of language." ] }
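The paper content above describes time token identification as matching token-level regular expressions and using POS tags to filter out ambiguous words (e.g., 'May' as a modal verb versus the month). A minimal Python sketch of that step follows; the patterns and the ambiguity table are simplified assumptions for illustration only, not the tagger's actual regular expressions (which the paper says are initialized from SUTime).

```python
import re

# Illustrative token regular expressions grouped under a few token types.
# These patterns are assumptions made for the sketch; the paper initializes
# its token-level patterns from SUTime.
TIME_TOKEN_PATTERNS = {
    "MONTH":     re.compile(r"^(january|february|march|april|may|june|july|august|"
                            r"september|october|november|december)$", re.I),
    "WEEK":      re.compile(r"^(monday|tuesday|wednesday|thursday|friday|saturday|sunday)$", re.I),
    "YEAR":      re.compile(r"^[12][0-9]{3}$"),
    "TIME_UNIT": re.compile(r"^(year|month|week|day|hour|minute|second)s?$", re.I),
    "TIMELINE":  re.compile(r"^(today|yesterday|tomorrow|tonight)$", re.I),
}

# POS tags under which an ambiguous word may still act as a time token:
# 'May' tagged MD (modal verb) is filtered out, 'May' tagged NNP is kept.
AMBIGUOUS_POS = {"may": {"NNP"}, "march": {"NNP"}, "fall": {"NN", "NNP"}}

def identify_time_tokens(tagged_tokens):
    """tagged_tokens: list of (word, POS) pairs. Returns {token index: token type}."""
    found = {}
    for i, (word, pos) in enumerate(tagged_tokens):
        low = word.lower()
        if low in AMBIGUOUS_POS and pos not in AMBIGUOUS_POS[low]:
            continue  # the POS filter rules out the non-temporal reading
        for token_type, pattern in TIME_TOKEN_PATTERNS.items():
            if pattern.match(word):
                found[i] = token_type
                break
    return found

# 'may' as a modal verb is ignored; 'February' and '1989' are time tokens.
print(identify_time_tokens([("You", "PRP"), ("may", "MD"), ("come", "VB"),
                            ("in", "IN"), ("February", "NNP"), ("1989", "CD")]))
# -> {4: 'MONTH', 5: 'YEAR'}
```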
{ "paper_header_number": [ "1", "2", "3.2", "4", "4.1", "4.2", "4.2.1", "4.2.2", "4.2.3", "4.3", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Finding", "SynTime: Syntactic Token Types and General Heuristic Rules", "SynTime Construction", "Time Expression Recognition", "Time Token Identification", "Time Segment Identification", "Time Expression Extraction", "SynTime Expansion", "Experiments", "Limitations", "Conclusion and future work" ] }
GEM-SciDuet-train-99#paper-1262#slide-3
Time Expression Analysis Finding 2
Occurrence: most time expressions contain time token(s). Example time tokens (shown in red on the slide) in expressions such as 'the third quarter of'. [Chart: percentage of time expressions that contain at least one time token, per dataset.]
Occurrence: most time expressions contain time token(s). Example time tokens (shown in red on the slide) in expressions such as 'the third quarter of'. [Chart: percentage of time expressions that contain at least one time token, per dataset.]
[]
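The Finding-2 statistic in this record (the share of time expressions containing at least one time token) could be reproduced roughly as follows, given annotated expression strings and some time-token matcher. This is an assumed reconstruction for illustration, not the authors' analysis script, and the toy matcher below is far cruder than the tagger's token patterns.

```python
import re

# Very rough time-token matcher, used only for this illustration.
TIME_TOKEN = re.compile(
    r"\b(year|month|week|day|hour|minute|quarter|january|february|march|april|may|"
    r"june|july|august|september|october|november|december|monday|tuesday|wednesday|"
    r"thursday|friday|saturday|sunday|today|yesterday|tomorrow|[12][0-9]{3})s?\b", re.I)

def share_with_time_token(expressions):
    """expressions: list of annotated time-expression strings."""
    hits = sum(1 for e in expressions if TIME_TOKEN.search(e))
    return hits / len(expressions)

sample = ["the third quarter of 1986", "several years ago", "2 to 8 days", "now"]
print(f"{share_with_time_token(sample):.2%}")   # -> 75.00% on this toy sample
```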
GEM-SciDuet-train-99#paper-1262#slide-4
1262
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime 1 for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods.
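The abstract's final step, merging time segments into time expressions, is sketched below following the merge rules described in the paper content: adjacent segments merge; a shared LINKAGE keeps segments apart; a lone COMMA between segments is absorbed only under extra conditions. The COMMA handling here is deliberately simplified, so this is an illustration of the idea rather than the tagger's actual rule set.

```python
# Minimal sketch of merging time segments into time expressions.
# Each segment is a (start, end) token span (end exclusive); `types` holds the
# token-type label of every token in the sentence.
def extract_time_expressions(segments, types):
    """segments: spans sorted by start; returns one merged span per expression."""
    if not segments:
        return []
    expressions = [list(segments[0])]
    for start, end in segments[1:]:
        prev = expressions[-1]
        if start <= prev[1]:
            # adjacent or overlapping: merge unless they meet at a LINKAGE
            merge = all(t != "LINKAGE" for t in types[start:prev[1]])
        else:
            # simplified COMMA rule: a lone comma between the two segments is
            # absorbed when the tokens on either side differ in token type
            between = types[prev[1]:start]
            merge = between == ["COMMA"] and types[start] != types[prev[1] - 1]
        if merge:
            prev[1] = max(prev[1], end)
        else:
            expressions.append([start, end])
    return [tuple(e) for e in expressions]

# 'October 10 , 2016': the MONTH and YEAR segments merge across the comma.
print(extract_time_expressions([(0, 2), (3, 4)], ["MONTH", "NUMERAL", "COMMA", "YEAR"]))
# -> [(0, 4)]
# 'Friday or Saturday': segments joined only by a LINKAGE stay separate.
print(extract_time_expressions([(0, 1), (2, 3)], ["WEEK", "LINKAGE", "WEEK"]))
# -> [(0, 1), (2, 3)]
```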
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249 ], "paper_content_text": [ "Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014) .", "Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "1 Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b) , Gigaword (Parker et al., 2011) , WikiWars (Mazur and Dale, 2010) , and Tweets.", "From the analysis we make four findings about time expressions.", "First, most time expressions are very short, with 80% of time expressions containing no more than three tokens.", "Second, at least 91.8% of time expressions contain at least one time token.", "Third, the vocabulary used to express time information is very small, with a small group of keywords.", "Finally, words in time expressions demonstrate similar syntactic behaviour.", "All the findings relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "According to the findings we propose a typebased approach named SynTime ('Syn' stands for syntactic) to recognize time expressions.", "Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions.", "Time tokens are the words that explicitly express time information, such as time units (e.g., 'year').", "Modifiers modify time tokens; they appear before or after time tokens, e.g., 'several' and 'ago' in 'several years ago.'", "Numerals are ordinals and numbers.", "From free text SynTime first identifies time tokens, then recognizes modifiers and numerals.", "Naturally, SynTime is a rule-based tagger.", "The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules.", "The definition of token type in SynTime is inspired by 
part-of-speech in which \"linguists group some words of language into classes (sets) which show similar syntactic behaviour.\"", "(Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour.", "Other rulebased taggers define types for tokens based on their semantic meaning.", "For example, SUTime defines 5 semantic modifier types, such as frequency modifiers; 2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens.", "(See Section 4.1 for details.)", "Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves.", "SynTime instead designs general rules on the token types rather than on the tokens themselves.", "For example, our general rules do not work on tokens 'February' nor '1989' but on their token types 'MONTH' and 'YEAR.'", "That is why we call SynTime a type-based approach.", "More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position.", "In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion.", "The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time.", "The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "(The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.)", "Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.", "3 datasets.", "More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset.", "To sum up, we make the following contributions.", "• We analyze time expressions from four datasets and make four findings.", "The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "• We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules.", "Syn-Time is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages.", "• We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines.", "Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "The task is divided into two subtasks: recognition and normalization.", "Rule-based Time Expression Recognition.", "Rule-based time taggers like GUTime, Heidel-Time, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Strötgen and Gertz, 2010; Chang and Manning, 2012) .", "Heidel-Time (Strötgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression.", "SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang 
and Manning, 2014) .", "It first identifies individual words, then expands them to chunks, and finally to time expressions.", "Rule-based taggers achieve very good results in TempEval exercises.", "SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens.", "Moreover, SynTime designs rules in a heuristic way.", "Machine Learning based Method.", "Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions.", "Example features include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013) .", "The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Some models obtain good performance, and even achieve the highest F 1 of 82.71% on strict match in TempEval-3 (Bethard, 2013) .", "Outside TempEval exercises, Angeli et al.", "leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012) .", "In the method named UWTime, Lee et al.", "handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014) .", "The two methods explicitly use linguistic information.", "In (Lee et al., 2014) , especially, CCG could capture rich structure information of language, similar to the rule-based methods.", "Tabassum et al.", "focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016) .", "They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.", "However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time.", "Time Expression Normalization.", "Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Strötgen and Gertz, 2010; Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Because the rule systems have high similarity, Llorens et al.", "suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012) .", "Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016) .", "Lee et al.", "(Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al.", "(Tabassum et al., 2016 ) use a loglinear algorithm to normalize time expressions.", "SynTime focuses only on the recognition task.", "The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012) .", "We conduct an analysis on four datasets: Time-Bank, Gigaword, WikiWars, and Tweets.", "Time-Bank (Pustejovsky et al., 2003b ) is a benchmark dataset in TempEval series (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , consisting of 183 news articles.", "Gigaword (Parker et al., 2011 ) is a large 
automatically labelled dataset with 2,452 news articles and used in TempEval-3.", "WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010) .", "Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression.", "Table 1 summarizes the datasets.", "Finding From the four datasets, we analyze their time expressions and make four findings.", "We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics.", "Finding 1 Time expressions are very short.", "More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words.", "Figure 1 plots the length distribution of time expressions.", "Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length Finding 2 More than 91% of time expressions contain at least one time token.", "The second column in Table 2 reports the percentage of time expressions that contain at least one time token.", "We find that at least 91.81% of time expressions contain time token(s).", "(Some time expressions have no time token but depend on other time expressions; in '2 to 8 days,' for example, '2' depends on '8 days.')", "This suggests that time tokens account for time expressions.", "Therefore, to recognize time expressions, it is essential to recognize their time tokens.", "Finding 3 Only a small group of time-related keywords are used to express time information.", "From the time expressions in all four datasets, we find that the group of keywords used to express time information is small.", "Table 3 reports the number of distinct words and of distinct time tokens.", "The words/tokens are manually normalized before counting and their variants are ignored.", "For example, 'year' and '5yrs' are counted as one token 'year.'", "Numerals in the counting are ignored.", "Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable.", "Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282.", "Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets.", "This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets.", "In other words, time expressions highly overlap at their time tokens.", "Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents.", "For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text.", "Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD.", "This indicates that POS could not provide enough information to distinguish time expressions from common words.", "However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT.", "Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD.", "This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way.", "When seeing this, we realize that this is exactly how linguists define part-of-speech for 
language.", "4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language.", "The four findings all relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small.", "To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral.", "SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types.", "Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level.", "Token types at the type level group the tokens of time expressions.", "Heuristic rules lie at the rule level, working on token types rather than on tokens themselves.", "That is why the heuristic rules are general.", "For example, the heuristic rules do not work on tokens '1989' nor 'February,' but on their token types 'YEAR' and 'MONTH.'", "The heuristic rules are only relevant to token types, and are independent of specific tokens.", "For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English.", "The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types.", "Figure 3 shows the overview of SynTime in practice.", "Shown on the left-hand side, SynTime is initialized with regular expressions over tokens.", "After initialization, SynTime can be directly applied on text.", "On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type.", "The expansion enables SynTime to recognize time expressions in text from different domains and different text types.", "Shown on the right-hand side of Figure 3 , Syn-Time recognizes time expression through three main steps.", "In the first step, SynTime identifies time tokens from the POS-tagged raw text.", "Then around the time tokens SynTime searches for modifiers and numerals to form time segments.", "In the last step, SynTime transforms the time segments to time expressions.", "SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral.", "Token types to tokens is like POS tags to words; for example, 'February' has a POS tag of NNP and a token type of MONTH.", "Time Token.", "We define 15 token types for the time tokens and use their names similar to Joda-Time classes: 5 DECADE (-), YEAR (-), SEA-SON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), 
TIME ZONE (6), and ERA (2).", "Number in '()' indicates the number of distinct tokens in this token type.", "'-' indicates that this token type involves changing digits and cannot be counted.", "Modifier.", "We define 3 token types for the modifiers according to their possible positions relative to time tokens.", "Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2).", "LINKAGE (4) link two time tokens.", "Besides, we define 2 special modifier types, COMMA (1) for comma ',' and IN ARTICLE (2) for indefinite articles 'a' and 'an.'", "TimeML (Pustejovsky et al., 2003a) and Time-Bank (Pustejovsky et al., 2003b) do not treat most prepositions like 'on' as a part of time expressions.", "Thus SynTime does not collect those prepositions.", "Numeral.", "Number in time expressions can be a time token e.g., '10' in 'October 10, 2016,' or a modifier e.g., '10' in '10 days.'", "We define NU-MERAL (-) for the ordinals and numbers.", "SynTime Initialization.", "The token regular expressions for initializing SynTime are collected from SUTime, 6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval-3 (Chang and Manning, , 2013 .", "Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions.", "Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions.", "The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction.", "Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions.", "Some words might cause ambiguity.", "For example, 'May' could be a modal verb, or the fifth month of year.", "To filter out the ambiguous words, we use POS information.", "In implementation, we use Stanford POS Tagger; 7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2.", "Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions.", "In the next two steps, SynTime works on those token types.", "Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment.", "The searching is under simple heuristic rules in which the key idea is to expand the time token's boundaries.", "At first, each time token is a time segment.", "If it is either a PERIOD or DURATION, then no need to further search.", "Otherwise, search its left and its right for modifiers and numerals.", "For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching.", "For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching.", "Both the left and the right searching stop when reaching a COMMA or LINK-AGE or a non-modifier/numeral word.", "The left searching does not exceed the previous time token; the right searching does not exceed the next time token.", "A time segment consists of exactly one time token, and zero or some modifiers/numerals.", "A special kind of time segments do not contain any time token; they depend on other time segments next to them.", "For example, 
in '8 to 20 days,' 'to 20 days' is a time segment, and '8 to' forms a dependent time segment.", "(See Figure 4(e) .)", "Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment.", "We scan the time segments in a sentence from beginning to the end.", "A stand-alone time segment is a time expression.", "(See Figure 4(a) .)", "The focus is to deal with two or more time segments that are adjacent or overlapping.", "If two time segments s 1 and s 2 are adjacent, merge them to form a new time segment s 1 .", "(See Figure 4(b) .)", "Consider that s 1 and s 2 overlap at a shared boundary.", "According to our time segment identification, the shared boundary could be a modifier or a numeral.", "If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s 1 and s 2 .", "(See Figure 4(c) .)", "If the word is a LINKAGE, then extract s 1 as a time expression and continue scanning.", "When the shared boundary is a COMMA, merge s 1 and s 2 only if the COMMA's previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same.", "(See Figure 4(d) .)", "Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types.", "After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types.", "SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule.", "The expansion requires the words to be added to be annotated manually.", "We apply the initial SynTime on the time expressions from training text and list the words that are not covered.", "Whether the uncovered words are added to SynTime is manually determined.", "The rule for determination is that the added words can not cause ambiguity and should be generic.", "Wiki-Wars dataset contains a few examples like this: 'The time Arnold reached Quebec City.'", "Words in this example are extremely descriptive, and we do not collect them.", "In tweets, on the other hand, people may use abbreviations and informal variants; for example, '2day' and 'tday' are popular spellings of 'today.'", "Such kind of abbreviations and informal variants will be collected.", "According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much.", "In addition, we find that even in tweets people tend to use formal words.", "In the Twitter word clusters trained from 56 million English tweets, 8 the most often used words are the formal words, and their frequencies are much greater than the informal words'.", "The cluster of 'today,' 9 for example, its most often use is the formal one, 'today,' which appears 1,220,829 times; while its second most often use '2day' appears only 34,827 times.", "The low rate of informal words (e.g., about 3% in 'today' cluster) suggests that even in informal environment the manual keyword addition costs little.", "Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UW-Time) on three datasets (i.e., TimeBank, Wiki-Wars, and Tweets).", "WikiWars is a specific domain 
dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text.", "For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I.", "Experiment Setting Datasets.", "We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset that are collected from Twitter.", "For Tweets dataset, we randomly sample 4000 tweets and use SUTime to tag them.", "942 tweets of which each contains at least one time expression.", "From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions.", "We therefore roughly consider that SU-Time misses about 3% time expressions in tweets.", "Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank.", "We finally get 1,127 manually labeled time expressions.", "For the 942 tweets, we randomly sample 200 tweets as test set, and the rest 742 as training set, because a baseline UWTime requires training.", "Baseline Methods.", "We compare SynTime with methods: HeidelTime (Strötgen and Gertz, 2010) , SUTime (Chang and , and UW- Evaluation Metrics.", "We follow TempEval-3 and use their evaluation toolkit 10 to report P recision, Recall, and F 1 in terms of strict match and relaxed match (UzZaman et al., 2013).", "22, 1986' and 'February 01, 1989 ' at the level of word or of character.", "One suggestion is to consider a type-based learning method that could use type information.", "For example, the above two time expressions refer to the same pattern of 'MONTH NUMERAL COMMA Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. 
On TimeBank and Tweets datasets, only a few tokens are added, the corresponding results are affected slightly.", "This confirms that the size of time words is small, and that SynTime-I covers most of time words.", "On WikiWars dataset, relatively more tokens are added, SynTime-E performs much better than SynTime-I, especially in recall.", "It improves the recall by 3.25% in strict match and by 2.98% in relaxed match.", "This indicates that with more words added from specific domains (e.g., WikiWars dataset about war), SynTime can significantly improve the performance.", "Experiment Result Limitations SynTime assumes that words are tokenized and POS tagged correctly.", "In reality, however, the tokenized and tagged words are not that perfect, due to the limitation of used tools.", "For example, Stanford POS Tagger assigns VBD to the word 'sat' in 'friday or sat' while whose tag should be NNP.", "The incorrect tokens and POS tags affect the result.", "Conclusion and future work We conduct an analysis on time expressions from four datasets, and find that time expressions in general are very short and expressed by a small vocabulary, and words in time expressions demonstrate similar syntactic behavior.", "Our findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "Inspired by part-of-speech, based on the findings, we define a syntactic type system for the time expression, and propose a type-based time expression tagger, named by SynTime.", "SynTime defines syntactic token types for tokens and on the token types it designs general heuristic rules based on the idea of boundary expansion.", "Experiments on three datasets show that SynTime outperforms the stateof-the-art baselines, including rule-based time taggers and machine learning based time tagger.", "Because our heuristic rules are quite simple, Syn-Time is light-weight and runs in real time.", "Our token types and heuristic rules are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.", "Time expression is part of language and follows the principle of least effort.", "Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995) , we might expect that humans would share some common habits, and therefore expect that other parts of language would more or less follow the same principle.", "In the future we will try our analytical method on other parts of language." ] }
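The SynTime-E expansion discussed above only adds manually vetted keywords under the existing token types; the heuristic rules themselves never change. A toy sketch of that mechanism is given below; the lexicon structure and helper names are assumptions, while '2day'/'tday' are the informal variants the paper itself mentions.

```python
from collections import defaultdict

# Toy sketch: expansion just registers manually vetted words under existing
# token types; none of the heuristic rules change.
token_lexicon = defaultdict(set)
token_lexicon["TIMELINE"].update({"today", "yesterday", "tomorrow"})

def expand(lexicon, token_type, new_words):
    """Add manually approved keywords (e.g. tweet spellings) under a token type."""
    lexicon[token_type].update(w.lower() for w in new_words)

# Informal variants of 'today' observed in tweets (the paper's own examples).
expand(token_lexicon, "TIMELINE", ["2day", "tday"])

def lookup(lexicon, word):
    w = word.lower()
    return next((t for t, words in lexicon.items() if w in words), None)

print(lookup(token_lexicon, "2day"))    # -> 'TIMELINE'
print(lookup(token_lexicon, "friday"))  # -> None (not registered in this toy lexicon)
```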
{ "paper_header_number": [ "1", "2", "3.2", "4", "4.1", "4.2", "4.2.1", "4.2.2", "4.2.3", "4.3", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Finding", "SynTime: Syntactic Token Types and General Heuristic Rules", "SynTime Construction", "Time Expression Recognition", "Time Token Identification", "Time Segment Identification", "Time Expression Extraction", "SynTime Expansion", "Experiments", "Limitations", "Conclusion and future work" ] }
GEM-SciDuet-train-99#paper-1262#slide-4
Time Expression Analysis Finding 3
Small vocabulary: only a small group of time words are used to express time information. [Tables: number of distinct words and of distinct time tokens in time expressions, per dataset and across the four datasets (#Words, #Time tokens).] Example tokens: 'next year'; 'years', 'year', 'yrs'; 'ago'. 45 distinct time tokens appear in all four datasets. That means time expressions highly overlap at their time tokens (e.g., overlap at 'year').
Small vocabulary: only a small group of time words are used to express time information. [Tables: number of distinct words and of distinct time tokens in time expressions, per dataset and across the four datasets (#Words, #Time tokens).] Example tokens: 'next year'; 'years', 'year', 'yrs'; 'ago'. 45 distinct time tokens appear in all four datasets. That means time expressions highly overlap at their time tokens (e.g., overlap at 'year').
[]
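The Finding-3 counts in this record (distinct words and distinct, normalized time tokens per dataset, and their overlap) might be reconstructed along these lines. The normalization follows the paper's description (variants such as '5yrs' counted as 'year', numerals ignored); the rest of the sketch, including the helper names, is an illustrative assumption.

```python
import re

def normalize(token):
    """Collapse variants as the paper describes: '5yrs', 'years' -> 'year'; drop numerals."""
    t = re.sub(r"\d+", "", token.lower())
    t = {"yrs": "year", "yr": "year"}.get(t, t)
    return t.rstrip("s") if len(t) > 3 else t    # crude plural stripping

def distinct_time_tokens(expressions_per_dataset):
    """expressions_per_dataset: {dataset name: list of time-expression strings}.
    Returns per-dataset sets of normalized tokens and the tokens shared by all."""
    per_dataset = {}
    for name, exprs in expressions_per_dataset.items():
        tokens = set()
        for expr in exprs:
            for tok in expr.split():
                norm = normalize(tok)
                if norm:                          # numerals normalize to '' and are dropped
                    tokens.add(norm)
        per_dataset[name] = tokens
    shared = set.intersection(*per_dataset.values()) if per_dataset else set()
    return per_dataset, shared

data = {"TimeBank": ["next year", "5yrs ago"], "Tweets": ["2 years ago", "2day"]}
per_dataset, shared = distinct_time_tokens(data)
print(shared)   # tokens appearing in both toy datasets, e.g. {'year', 'ago'}
```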
GEM-SciDuet-train-99#paper-1262#slide-5
1262
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime 1 for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249 ], "paper_content_text": [ "Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014) .", "Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "1 Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b) , Gigaword (Parker et al., 2011) , WikiWars (Mazur and Dale, 2010) , and Tweets.", "From the analysis we make four findings about time expressions.", "First, most time expressions are very short, with 80% of time expressions containing no more than three tokens.", "Second, at least 91.8% of time expressions contain at least one time token.", "Third, the vocabulary used to express time information is very small, with a small group of keywords.", "Finally, words in time expressions demonstrate similar syntactic behaviour.", "All the findings relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "According to the findings we propose a typebased approach named SynTime ('Syn' stands for syntactic) to recognize time expressions.", "Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions.", "Time tokens are the words that explicitly express time information, such as time units (e.g., 'year').", "Modifiers modify time tokens; they appear before or after time tokens, e.g., 'several' and 'ago' in 'several years ago.'", "Numerals are ordinals and numbers.", "From free text SynTime first identifies time tokens, then recognizes modifiers and numerals.", "Naturally, SynTime is a rule-based tagger.", "The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules.", "The definition of token type in SynTime is inspired by 
part-of-speech in which \"linguists group some words of language into classes (sets) which show similar syntactic behaviour.\"", "(Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour.", "Other rulebased taggers define types for tokens based on their semantic meaning.", "For example, SUTime defines 5 semantic modifier types, such as frequency modifiers; 2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens.", "(See Section 4.1 for details.)", "Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves.", "SynTime instead designs general rules on the token types rather than on the tokens themselves.", "For example, our general rules do not work on tokens 'February' nor '1989' but on their token types 'MONTH' and 'YEAR.'", "That is why we call SynTime a type-based approach.", "More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position.", "In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion.", "The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time.", "The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "(The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.)", "Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.", "3 datasets.", "More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset.", "To sum up, we make the following contributions.", "• We analyze time expressions from four datasets and make four findings.", "The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "• We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules.", "Syn-Time is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages.", "• We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines.", "Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "The task is divided into two subtasks: recognition and normalization.", "Rule-based Time Expression Recognition.", "Rule-based time taggers like GUTime, Heidel-Time, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Strötgen and Gertz, 2010; Chang and Manning, 2012) .", "Heidel-Time (Strötgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression.", "SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang 
and Manning, 2014) .", "It first identifies individual words, then expands them to chunks, and finally to time expressions.", "Rule-based taggers achieve very good results in TempEval exercises.", "SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens.", "Moreover, SynTime designs rules in a heuristic way.", "Machine Learning based Method.", "Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions.", "Example features include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013) .", "The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Some models obtain good performance, and even achieve the highest F 1 of 82.71% on strict match in TempEval-3 (Bethard, 2013) .", "Outside TempEval exercises, Angeli et al.", "leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012) .", "In the method named UWTime, Lee et al.", "handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014) .", "The two methods explicitly use linguistic information.", "In (Lee et al., 2014) , especially, CCG could capture rich structure information of language, similar to the rule-based methods.", "Tabassum et al.", "focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016) .", "They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.", "However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time.", "Time Expression Normalization.", "Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Strötgen and Gertz, 2010; Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Because the rule systems have high similarity, Llorens et al.", "suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012) .", "Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016) .", "Lee et al.", "(Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al.", "(Tabassum et al., 2016 ) use a loglinear algorithm to normalize time expressions.", "SynTime focuses only on the recognition task.", "The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012) .", "We conduct an analysis on four datasets: Time-Bank, Gigaword, WikiWars, and Tweets.", "Time-Bank (Pustejovsky et al., 2003b ) is a benchmark dataset in TempEval series (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , consisting of 183 news articles.", "Gigaword (Parker et al., 2011 ) is a large 
automatically labelled dataset with 2,452 news articles and used in TempEval-3.", "WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010) .", "Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression.", "Table 1 summarizes the datasets.", "Finding From the four datasets, we analyze their time expressions and make four findings.", "We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics.", "Finding 1 Time expressions are very short.", "More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words.", "Figure 1 plots the length distribution of time expressions.", "Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length Finding 2 More than 91% of time expressions contain at least one time token.", "The second column in Table 2 reports the percentage of time expressions that contain at least one time token.", "We find that at least 91.81% of time expressions contain time token(s).", "(Some time expressions have no time token but depend on other time expressions; in '2 to 8 days,' for example, '2' depends on '8 days.')", "This suggests that time tokens account for time expressions.", "Therefore, to recognize time expressions, it is essential to recognize their time tokens.", "Finding 3 Only a small group of time-related keywords are used to express time information.", "From the time expressions in all four datasets, we find that the group of keywords used to express time information is small.", "Table 3 reports the number of distinct words and of distinct time tokens.", "The words/tokens are manually normalized before counting and their variants are ignored.", "For example, 'year' and '5yrs' are counted as one token 'year.'", "Numerals in the counting are ignored.", "Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable.", "Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282.", "Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets.", "This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets.", "In other words, time expressions highly overlap at their time tokens.", "Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents.", "For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text.", "Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD.", "This indicates that POS could not provide enough information to distinguish time expressions from common words.", "However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT.", "Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD.", "This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way.", "When seeing this, we realize that this is exactly how linguists define part-of-speech for 
language.", "4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language.", "The four findings all relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small.", "To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral.", "SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types.", "Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level.", "Token types at the type level group the tokens of time expressions.", "Heuristic rules lie at the rule level, working on token types rather than on tokens themselves.", "That is why the heuristic rules are general.", "For example, the heuristic rules do not work on tokens '1989' nor 'February,' but on their token types 'YEAR' and 'MONTH.'", "The heuristic rules are only relevant to token types, and are independent of specific tokens.", "For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English.", "The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types.", "Figure 3 shows the overview of SynTime in practice.", "Shown on the left-hand side, SynTime is initialized with regular expressions over tokens.", "After initialization, SynTime can be directly applied on text.", "On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type.", "The expansion enables SynTime to recognize time expressions in text from different domains and different text types.", "Shown on the right-hand side of Figure 3 , Syn-Time recognizes time expression through three main steps.", "In the first step, SynTime identifies time tokens from the POS-tagged raw text.", "Then around the time tokens SynTime searches for modifiers and numerals to form time segments.", "In the last step, SynTime transforms the time segments to time expressions.", "SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral.", "Token types to tokens is like POS tags to words; for example, 'February' has a POS tag of NNP and a token type of MONTH.", "Time Token.", "We define 15 token types for the time tokens and use their names similar to Joda-Time classes: 5 DECADE (-), YEAR (-), SEA-SON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), 
TIME ZONE (6), and ERA (2).", "Number in '()' indicates the number of distinct tokens in this token type.", "'-' indicates that this token type involves changing digits and cannot be counted.", "Modifier.", "We define 3 token types for the modifiers according to their possible positions relative to time tokens.", "Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2).", "LINKAGE (4) link two time tokens.", "Besides, we define 2 special modifier types, COMMA (1) for comma ',' and IN ARTICLE (2) for indefinite articles 'a' and 'an.'", "TimeML (Pustejovsky et al., 2003a) and Time-Bank (Pustejovsky et al., 2003b) do not treat most prepositions like 'on' as a part of time expressions.", "Thus SynTime does not collect those prepositions.", "Numeral.", "Number in time expressions can be a time token e.g., '10' in 'October 10, 2016,' or a modifier e.g., '10' in '10 days.'", "We define NU-MERAL (-) for the ordinals and numbers.", "SynTime Initialization.", "The token regular expressions for initializing SynTime are collected from SUTime, 6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval-3 (Chang and Manning, , 2013 .", "Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions.", "Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions.", "The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction.", "Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions.", "Some words might cause ambiguity.", "For example, 'May' could be a modal verb, or the fifth month of year.", "To filter out the ambiguous words, we use POS information.", "In implementation, we use Stanford POS Tagger; 7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2.", "Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions.", "In the next two steps, SynTime works on those token types.", "Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment.", "The searching is under simple heuristic rules in which the key idea is to expand the time token's boundaries.", "At first, each time token is a time segment.", "If it is either a PERIOD or DURATION, then no need to further search.", "Otherwise, search its left and its right for modifiers and numerals.", "For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching.", "For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching.", "Both the left and the right searching stop when reaching a COMMA or LINK-AGE or a non-modifier/numeral word.", "The left searching does not exceed the previous time token; the right searching does not exceed the next time token.", "A time segment consists of exactly one time token, and zero or some modifiers/numerals.", "A special kind of time segments do not contain any time token; they depend on other time segments next to them.", "For example, 
in '8 to 20 days,' 'to 20 days' is a time segment, and '8 to' forms a dependent time segment.", "(See Figure 4(e) .)", "Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment.", "We scan the time segments in a sentence from beginning to the end.", "A stand-alone time segment is a time expression.", "(See Figure 4(a) .)", "The focus is to deal with two or more time segments that are adjacent or overlapping.", "If two time segments s 1 and s 2 are adjacent, merge them to form a new time segment s 1 .", "(See Figure 4(b) .)", "Consider that s 1 and s 2 overlap at a shared boundary.", "According to our time segment identification, the shared boundary could be a modifier or a numeral.", "If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s 1 and s 2 .", "(See Figure 4(c) .)", "If the word is a LINKAGE, then extract s 1 as a time expression and continue scanning.", "When the shared boundary is a COMMA, merge s 1 and s 2 only if the COMMA's previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same.", "(See Figure 4(d) .)", "Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types.", "After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types.", "SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule.", "The expansion requires the words to be added to be annotated manually.", "We apply the initial SynTime on the time expressions from training text and list the words that are not covered.", "Whether the uncovered words are added to SynTime is manually determined.", "The rule for determination is that the added words can not cause ambiguity and should be generic.", "Wiki-Wars dataset contains a few examples like this: 'The time Arnold reached Quebec City.'", "Words in this example are extremely descriptive, and we do not collect them.", "In tweets, on the other hand, people may use abbreviations and informal variants; for example, '2day' and 'tday' are popular spellings of 'today.'", "Such kind of abbreviations and informal variants will be collected.", "According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much.", "In addition, we find that even in tweets people tend to use formal words.", "In the Twitter word clusters trained from 56 million English tweets, 8 the most often used words are the formal words, and their frequencies are much greater than the informal words'.", "The cluster of 'today,' 9 for example, its most often use is the formal one, 'today,' which appears 1,220,829 times; while its second most often use '2day' appears only 34,827 times.", "The low rate of informal words (e.g., about 3% in 'today' cluster) suggests that even in informal environment the manual keyword addition costs little.", "Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UW-Time) on three datasets (i.e., TimeBank, Wiki-Wars, and Tweets).", "WikiWars is a specific domain 
dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text.", "For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I.", "Experiment Setting Datasets.", "We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset that are collected from Twitter.", "For Tweets dataset, we randomly sample 4000 tweets and use SUTime to tag them.", "942 tweets of which each contains at least one time expression.", "From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions.", "We therefore roughly consider that SU-Time misses about 3% time expressions in tweets.", "Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank.", "We finally get 1,127 manually labeled time expressions.", "For the 942 tweets, we randomly sample 200 tweets as test set, and the rest 742 as training set, because a baseline UWTime requires training.", "Baseline Methods.", "We compare SynTime with methods: HeidelTime (Strötgen and Gertz, 2010) , SUTime (Chang and , and UW- Evaluation Metrics.", "We follow TempEval-3 and use their evaluation toolkit 10 to report P recision, Recall, and F 1 in terms of strict match and relaxed match (UzZaman et al., 2013).", "22, 1986' and 'February 01, 1989 ' at the level of word or of character.", "One suggestion is to consider a type-based learning method that could use type information.", "For example, the above two time expressions refer to the same pattern of 'MONTH NUMERAL COMMA Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. 
On TimeBank and Tweets datasets, only a few tokens are added, the corresponding results are affected slightly.", "This confirms that the size of time words is small, and that SynTime-I covers most of time words.", "On WikiWars dataset, relatively more tokens are added, SynTime-E performs much better than SynTime-I, especially in recall.", "It improves the recall by 3.25% in strict match and by 2.98% in relaxed match.", "This indicates that with more words added from specific domains (e.g., WikiWars dataset about war), SynTime can significantly improve the performance.", "Experiment Result Limitations SynTime assumes that words are tokenized and POS tagged correctly.", "In reality, however, the tokenized and tagged words are not that perfect, due to the limitation of used tools.", "For example, Stanford POS Tagger assigns VBD to the word 'sat' in 'friday or sat' while whose tag should be NNP.", "The incorrect tokens and POS tags affect the result.", "Conclusion and future work We conduct an analysis on time expressions from four datasets, and find that time expressions in general are very short and expressed by a small vocabulary, and words in time expressions demonstrate similar syntactic behavior.", "Our findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "Inspired by part-of-speech, based on the findings, we define a syntactic type system for the time expression, and propose a type-based time expression tagger, named by SynTime.", "SynTime defines syntactic token types for tokens and on the token types it designs general heuristic rules based on the idea of boundary expansion.", "Experiments on three datasets show that SynTime outperforms the stateof-the-art baselines, including rule-based time taggers and machine learning based time tagger.", "Because our heuristic rules are quite simple, Syn-Time is light-weight and runs in real time.", "Our token types and heuristic rules are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.", "Time expression is part of language and follows the principle of least effort.", "Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995) , we might expect that humans would share some common habits, and therefore expect that other parts of language would more or less follow the same principle.", "In the future we will try our analytical method on other parts of language." ] }
{ "paper_header_number": [ "1", "2", "3.2", "4", "4.1", "4.2", "4.2.1", "4.2.2", "4.2.3", "4.3", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Finding", "SynTime: Syntactic Token Types and General Heuristic Rules", "SynTime Construction", "Time Expression Recognition", "Time Token Identification", "Time Segment Identification", "Time Expression Extraction", "SynTime Expansion", "Experiments", "Limitations", "Conclusion and future work" ] }
GEM-SciDuet-train-99#paper-1262#slide-5
Time Expression Analysis Finding 4
Similar syntactic behaviour: (1) POS information cannot distinguish time expressions from common text, but (2) within time expressions, POS tags can help distinguish their constituents. (1) For the top 40 POS tags (10 × 4 datasets), 37 have percentages lower than 20%. (2) Time tokens mainly have NN* and RB, modifiers have JJ and RB, and numerals have CD.
Similar syntactic behaviour: (1) POS information cannot distinguish time expressions from common text, but (2) within time expressions, POS tags can help distinguish their constituents. (1) For the top 40 POS tags (10 × 4 datasets), 37 have percentages lower than 20%. (2) Time tokens mainly have NN* and RB, modifiers have JJ and RB, and numerals have CD.
[]
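The paper content in the record above describes SynTime's type-based recognition pipeline: assign each token a syntactic token type, expand boundaries around each time token to form a time segment, and merge adjacent or overlapping segments into time expressions. The following is a minimal illustrative sketch of that idea in Python, not the authors' implementation: the lexicon, the type inventory, and the rules here are simplified assumptions (the real SynTime is initialized with SUTime's token regular expressions, uses about 21 token types, disambiguates with POS tags, and applies COMMA/LINKAGE merging conditions that are omitted below).

```python
import re

# Tiny illustrative lexicon of SynTime-style token types (an assumption for
# this sketch; the paper initializes its types with SUTime's token regexes).
LEXICON = {
    "MONTH": {"january", "february", "march", "april", "may", "june", "july",
              "august", "september", "october", "november", "december"},
    "WEEK": {"monday", "tuesday", "wednesday", "thursday", "friday",
             "saturday", "sunday"},
    "TIME_UNIT": {"year", "years", "month", "months", "week", "weeks",
                  "day", "days", "hour", "hours"},
    "TIMELINE": {"today", "yesterday", "tomorrow", "tonight", "now"},
    "PREFIX": {"early", "late", "last", "next", "past", "this", "several", "few"},
    "SUFFIX": {"ago", "later"},
}
TIME_TYPES = {"MONTH", "WEEK", "TIME_UNIT", "TIMELINE", "YEAR"}

def token_type(tok):
    """Assign one syntactic token type per token: time token, modifier, or numeral."""
    low = tok.lower()
    for ttype, words in LEXICON.items():
        if low in words:
            return ttype
    if re.fullmatch(r"(1[0-9]|20)\d\d", tok):        # four-digit years
        return "YEAR"
    if re.fullmatch(r"\d+(st|nd|rd|th)?", low):      # numbers and ordinals
        return "NUMERAL"
    return None  # ordinary word (the real system also filters ambiguous words like 'may' by POS)

def recognize(tokens):
    """Find time expressions by expanding boundaries around each time token."""
    types = [token_type(t) for t in tokens]
    segments = []
    for i, ttype in enumerate(types):
        if ttype not in TIME_TYPES:
            continue
        # Expand left over prefix modifiers and numerals, right over suffixes
        # and numerals; stop at any other word (COMMA/LINKAGE rules omitted).
        left = i
        while left > 0 and types[left - 1] in {"PREFIX", "NUMERAL"}:
            left -= 1
        right = i
        while right + 1 < len(tokens) and types[right + 1] in {"SUFFIX", "NUMERAL"}:
            right += 1
        segments.append([left, right])
    # Merge adjacent or overlapping segments into full time expressions.
    merged = []
    for start, end in segments:
        if merged and start <= merged[-1][1] + 1:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [" ".join(tokens[s:e + 1]) for s, e in merged]

if __name__ == "__main__":
    sentence = "She arrived several days ago , in late February 2015 ."
    print(recognize(sentence.split()))
    # -> ['several days ago', 'late February 2015']
```

The sketch reproduces only the core boundary-expansion idea; IN_ARTICLE handling, the comma-merging conditions, dependent segments (e.g., '8 to' in '8 to 20 days'), and normalization are left out.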
GEM-SciDuet-train-99#paper-1262#slide-6
1262
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime 1 for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249 ], "paper_content_text": [ "Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014) .", "Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "1 Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b) , Gigaword (Parker et al., 2011) , WikiWars (Mazur and Dale, 2010) , and Tweets.", "From the analysis we make four findings about time expressions.", "First, most time expressions are very short, with 80% of time expressions containing no more than three tokens.", "Second, at least 91.8% of time expressions contain at least one time token.", "Third, the vocabulary used to express time information is very small, with a small group of keywords.", "Finally, words in time expressions demonstrate similar syntactic behaviour.", "All the findings relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "According to the findings we propose a typebased approach named SynTime ('Syn' stands for syntactic) to recognize time expressions.", "Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions.", "Time tokens are the words that explicitly express time information, such as time units (e.g., 'year').", "Modifiers modify time tokens; they appear before or after time tokens, e.g., 'several' and 'ago' in 'several years ago.'", "Numerals are ordinals and numbers.", "From free text SynTime first identifies time tokens, then recognizes modifiers and numerals.", "Naturally, SynTime is a rule-based tagger.", "The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules.", "The definition of token type in SynTime is inspired by 
part-of-speech in which \"linguists group some words of language into classes (sets) which show similar syntactic behaviour.\"", "(Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour.", "Other rulebased taggers define types for tokens based on their semantic meaning.", "For example, SUTime defines 5 semantic modifier types, such as frequency modifiers; 2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens.", "(See Section 4.1 for details.)", "Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves.", "SynTime instead designs general rules on the token types rather than on the tokens themselves.", "For example, our general rules do not work on tokens 'February' nor '1989' but on their token types 'MONTH' and 'YEAR.'", "That is why we call SynTime a type-based approach.", "More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position.", "In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion.", "The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time.", "The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "(The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.)", "Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.", "3 datasets.", "More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset.", "To sum up, we make the following contributions.", "• We analyze time expressions from four datasets and make four findings.", "The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "• We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules.", "Syn-Time is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages.", "• We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines.", "Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "The task is divided into two subtasks: recognition and normalization.", "Rule-based Time Expression Recognition.", "Rule-based time taggers like GUTime, Heidel-Time, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Strötgen and Gertz, 2010; Chang and Manning, 2012) .", "Heidel-Time (Strötgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression.", "SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang 
and Manning, 2014) .", "It first identifies individual words, then expands them to chunks, and finally to time expressions.", "Rule-based taggers achieve very good results in TempEval exercises.", "SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens.", "Moreover, SynTime designs rules in a heuristic way.", "Machine Learning based Method.", "Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions.", "Example features include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013) .", "The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Some models obtain good performance, and even achieve the highest F 1 of 82.71% on strict match in TempEval-3 (Bethard, 2013) .", "Outside TempEval exercises, Angeli et al.", "leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012) .", "In the method named UWTime, Lee et al.", "handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014) .", "The two methods explicitly use linguistic information.", "In (Lee et al., 2014) , especially, CCG could capture rich structure information of language, similar to the rule-based methods.", "Tabassum et al.", "focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016) .", "They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.", "However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time.", "Time Expression Normalization.", "Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Strötgen and Gertz, 2010; Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Because the rule systems have high similarity, Llorens et al.", "suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012) .", "Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016) .", "Lee et al.", "(Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al.", "(Tabassum et al., 2016 ) use a loglinear algorithm to normalize time expressions.", "SynTime focuses only on the recognition task.", "The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012) .", "We conduct an analysis on four datasets: Time-Bank, Gigaword, WikiWars, and Tweets.", "Time-Bank (Pustejovsky et al., 2003b ) is a benchmark dataset in TempEval series (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , consisting of 183 news articles.", "Gigaword (Parker et al., 2011 ) is a large 
automatically labelled dataset with 2,452 news articles and used in TempEval-3.", "WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010) .", "Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression.", "Table 1 summarizes the datasets.", "Finding From the four datasets, we analyze their time expressions and make four findings.", "We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics.", "Finding 1 Time expressions are very short.", "More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words.", "Figure 1 plots the length distribution of time expressions.", "Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length Finding 2 More than 91% of time expressions contain at least one time token.", "The second column in Table 2 reports the percentage of time expressions that contain at least one time token.", "We find that at least 91.81% of time expressions contain time token(s).", "(Some time expressions have no time token but depend on other time expressions; in '2 to 8 days,' for example, '2' depends on '8 days.')", "This suggests that time tokens account for time expressions.", "Therefore, to recognize time expressions, it is essential to recognize their time tokens.", "Finding 3 Only a small group of time-related keywords are used to express time information.", "From the time expressions in all four datasets, we find that the group of keywords used to express time information is small.", "Table 3 reports the number of distinct words and of distinct time tokens.", "The words/tokens are manually normalized before counting and their variants are ignored.", "For example, 'year' and '5yrs' are counted as one token 'year.'", "Numerals in the counting are ignored.", "Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable.", "Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282.", "Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets.", "This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets.", "In other words, time expressions highly overlap at their time tokens.", "Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents.", "For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text.", "Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD.", "This indicates that POS could not provide enough information to distinguish time expressions from common words.", "However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT.", "Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD.", "This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way.", "When seeing this, we realize that this is exactly how linguists define part-of-speech for 
language.", "4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language.", "The four findings all relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small.", "To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral.", "SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types.", "Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level.", "Token types at the type level group the tokens of time expressions.", "Heuristic rules lie at the rule level, working on token types rather than on tokens themselves.", "That is why the heuristic rules are general.", "For example, the heuristic rules do not work on tokens '1989' nor 'February,' but on their token types 'YEAR' and 'MONTH.'", "The heuristic rules are only relevant to token types, and are independent of specific tokens.", "For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English.", "The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types.", "Figure 3 shows the overview of SynTime in practice.", "Shown on the left-hand side, SynTime is initialized with regular expressions over tokens.", "After initialization, SynTime can be directly applied on text.", "On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type.", "The expansion enables SynTime to recognize time expressions in text from different domains and different text types.", "Shown on the right-hand side of Figure 3 , Syn-Time recognizes time expression through three main steps.", "In the first step, SynTime identifies time tokens from the POS-tagged raw text.", "Then around the time tokens SynTime searches for modifiers and numerals to form time segments.", "In the last step, SynTime transforms the time segments to time expressions.", "SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral.", "Token types to tokens is like POS tags to words; for example, 'February' has a POS tag of NNP and a token type of MONTH.", "Time Token.", "We define 15 token types for the time tokens and use their names similar to Joda-Time classes: 5 DECADE (-), YEAR (-), SEA-SON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), 
TIME ZONE (6), and ERA (2).", "Number in '()' indicates the number of distinct tokens in this token type.", "'-' indicates that this token type involves changing digits and cannot be counted.", "Modifier.", "We define 3 token types for the modifiers according to their possible positions relative to time tokens.", "Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2).", "LINKAGE (4) link two time tokens.", "Besides, we define 2 special modifier types, COMMA (1) for comma ',' and IN ARTICLE (2) for indefinite articles 'a' and 'an.'", "TimeML (Pustejovsky et al., 2003a) and Time-Bank (Pustejovsky et al., 2003b) do not treat most prepositions like 'on' as a part of time expressions.", "Thus SynTime does not collect those prepositions.", "Numeral.", "Number in time expressions can be a time token e.g., '10' in 'October 10, 2016,' or a modifier e.g., '10' in '10 days.'", "We define NU-MERAL (-) for the ordinals and numbers.", "SynTime Initialization.", "The token regular expressions for initializing SynTime are collected from SUTime, 6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval-3 (Chang and Manning, , 2013 .", "Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions.", "Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions.", "The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction.", "Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions.", "Some words might cause ambiguity.", "For example, 'May' could be a modal verb, or the fifth month of year.", "To filter out the ambiguous words, we use POS information.", "In implementation, we use Stanford POS Tagger; 7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2.", "Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions.", "In the next two steps, SynTime works on those token types.", "Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment.", "The searching is under simple heuristic rules in which the key idea is to expand the time token's boundaries.", "At first, each time token is a time segment.", "If it is either a PERIOD or DURATION, then no need to further search.", "Otherwise, search its left and its right for modifiers and numerals.", "For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching.", "For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching.", "Both the left and the right searching stop when reaching a COMMA or LINK-AGE or a non-modifier/numeral word.", "The left searching does not exceed the previous time token; the right searching does not exceed the next time token.", "A time segment consists of exactly one time token, and zero or some modifiers/numerals.", "A special kind of time segments do not contain any time token; they depend on other time segments next to them.", "For example, 
in '8 to 20 days,' 'to 20 days' is a time segment, and '8 to' forms a dependent time segment.", "(See Figure 4(e) .)", "Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment.", "We scan the time segments in a sentence from beginning to the end.", "A stand-alone time segment is a time expression.", "(See Figure 4(a) .)", "The focus is to deal with two or more time segments that are adjacent or overlapping.", "If two time segments s 1 and s 2 are adjacent, merge them to form a new time segment s 1 .", "(See Figure 4(b) .)", "Consider that s 1 and s 2 overlap at a shared boundary.", "According to our time segment identification, the shared boundary could be a modifier or a numeral.", "If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s 1 and s 2 .", "(See Figure 4(c) .)", "If the word is a LINKAGE, then extract s 1 as a time expression and continue scanning.", "When the shared boundary is a COMMA, merge s 1 and s 2 only if the COMMA's previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same.", "(See Figure 4(d) .)", "Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types.", "After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types.", "SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule.", "The expansion requires the words to be added to be annotated manually.", "We apply the initial SynTime on the time expressions from training text and list the words that are not covered.", "Whether the uncovered words are added to SynTime is manually determined.", "The rule for determination is that the added words can not cause ambiguity and should be generic.", "Wiki-Wars dataset contains a few examples like this: 'The time Arnold reached Quebec City.'", "Words in this example are extremely descriptive, and we do not collect them.", "In tweets, on the other hand, people may use abbreviations and informal variants; for example, '2day' and 'tday' are popular spellings of 'today.'", "Such kind of abbreviations and informal variants will be collected.", "According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much.", "In addition, we find that even in tweets people tend to use formal words.", "In the Twitter word clusters trained from 56 million English tweets, 8 the most often used words are the formal words, and their frequencies are much greater than the informal words'.", "The cluster of 'today,' 9 for example, its most often use is the formal one, 'today,' which appears 1,220,829 times; while its second most often use '2day' appears only 34,827 times.", "The low rate of informal words (e.g., about 3% in 'today' cluster) suggests that even in informal environment the manual keyword addition costs little.", "Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UW-Time) on three datasets (i.e., TimeBank, Wiki-Wars, and Tweets).", "WikiWars is a specific domain 
dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text.", "For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I.", "Experiment Setting Datasets.", "We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset that are collected from Twitter.", "For Tweets dataset, we randomly sample 4000 tweets and use SUTime to tag them.", "942 tweets of which each contains at least one time expression.", "From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions.", "We therefore roughly consider that SU-Time misses about 3% time expressions in tweets.", "Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank.", "We finally get 1,127 manually labeled time expressions.", "For the 942 tweets, we randomly sample 200 tweets as test set, and the rest 742 as training set, because a baseline UWTime requires training.", "Baseline Methods.", "We compare SynTime with methods: HeidelTime (Strötgen and Gertz, 2010) , SUTime (Chang and , and UW- Evaluation Metrics.", "We follow TempEval-3 and use their evaluation toolkit 10 to report P recision, Recall, and F 1 in terms of strict match and relaxed match (UzZaman et al., 2013).", "22, 1986' and 'February 01, 1989 ' at the level of word or of character.", "One suggestion is to consider a type-based learning method that could use type information.", "For example, the above two time expressions refer to the same pattern of 'MONTH NUMERAL COMMA Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. 
On TimeBank and Tweets datasets, only a few tokens are added, the corresponding results are affected slightly.", "This confirms that the size of time words is small, and that SynTime-I covers most of time words.", "On WikiWars dataset, relatively more tokens are added, SynTime-E performs much better than SynTime-I, especially in recall.", "It improves the recall by 3.25% in strict match and by 2.98% in relaxed match.", "This indicates that with more words added from specific domains (e.g., WikiWars dataset about war), SynTime can significantly improve the performance.", "Experiment Result Limitations SynTime assumes that words are tokenized and POS tagged correctly.", "In reality, however, the tokenized and tagged words are not that perfect, due to the limitation of used tools.", "For example, Stanford POS Tagger assigns VBD to the word 'sat' in 'friday or sat' while whose tag should be NNP.", "The incorrect tokens and POS tags affect the result.", "Conclusion and future work We conduct an analysis on time expressions from four datasets, and find that time expressions in general are very short and expressed by a small vocabulary, and words in time expressions demonstrate similar syntactic behavior.", "Our findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "Inspired by part-of-speech, based on the findings, we define a syntactic type system for the time expression, and propose a type-based time expression tagger, named by SynTime.", "SynTime defines syntactic token types for tokens and on the token types it designs general heuristic rules based on the idea of boundary expansion.", "Experiments on three datasets show that SynTime outperforms the stateof-the-art baselines, including rule-based time taggers and machine learning based time tagger.", "Because our heuristic rules are quite simple, Syn-Time is light-weight and runs in real time.", "Our token types and heuristic rules are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.", "Time expression is part of language and follows the principle of least effort.", "Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995) , we might expect that humans would share some common habits, and therefore expect that other parts of language would more or less follow the same principle.", "In the future we will try our analytical method on other parts of language." ] }
{ "paper_header_number": [ "1", "2", "3.2", "4", "4.1", "4.2", "4.2.1", "4.2.2", "4.2.3", "4.3", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Finding", "SynTime: Syntactic Token Types and General Heuristic Rules", "SynTime Construction", "Time Expression Recognition", "Time Token Identification", "Time Segment Identification", "Time Expression Extraction", "SynTime Expansion", "Experiments", "Limitations", "Conclusion and future work" ] }
GEM-SciDuet-train-99#paper-1262#slide-6
Time Expression Analysis Eureka
Similar syntactic behaviour: (1) POS information cannot distinguish time expressions from common text, but (2) within time expressions, POS tags can help distinguish their constituents. (1) For the top 40 POS tags (10 × 4 datasets), 37 have percentages lower than 20%. (2) Time tokens mainly have NN* and RB, modifiers have JJ and RB, and numerals have CD. When seeing (2), we realize that this is exactly how linguists define part-of-speech for language; similar words have similar syntactic behaviour. The definition of part-of-speech for language inspires us to define a type system for the time expression, part of language.
Similar syntactic behaviour: (1) POS information cannot distinguish time expressions from common text, but (2) within time expressions, POS tags can help distinguish their constituents. (1) For the top 40 POS tags (10 × 4 datasets), 37 have percentages lower than 20%. (2) Time tokens mainly have NN* and RB, modifiers have JJ and RB, and numerals have CD. When seeing (2), we realize that this is exactly how linguists define part-of-speech for language; similar words have similar syntactic behaviour. The definition of part-of-speech for language inspires us to define a type system for the time expression, part of language.
[]
GEM-SciDuet-train-99#paper-1262#slide-7
1262
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime 1 for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249 ], "paper_content_text": [ "Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014) .", "Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "1 Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b) , Gigaword (Parker et al., 2011) , WikiWars (Mazur and Dale, 2010) , and Tweets.", "From the analysis we make four findings about time expressions.", "First, most time expressions are very short, with 80% of time expressions containing no more than three tokens.", "Second, at least 91.8% of time expressions contain at least one time token.", "Third, the vocabulary used to express time information is very small, with a small group of keywords.", "Finally, words in time expressions demonstrate similar syntactic behaviour.", "All the findings relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "According to the findings we propose a typebased approach named SynTime ('Syn' stands for syntactic) to recognize time expressions.", "Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions.", "Time tokens are the words that explicitly express time information, such as time units (e.g., 'year').", "Modifiers modify time tokens; they appear before or after time tokens, e.g., 'several' and 'ago' in 'several years ago.'", "Numerals are ordinals and numbers.", "From free text SynTime first identifies time tokens, then recognizes modifiers and numerals.", "Naturally, SynTime is a rule-based tagger.", "The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules.", "The definition of token type in SynTime is inspired by 
part-of-speech in which \"linguists group some words of language into classes (sets) which show similar syntactic behaviour.\"", "(Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour.", "Other rulebased taggers define types for tokens based on their semantic meaning.", "For example, SUTime defines 5 semantic modifier types, such as frequency modifiers; 2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens.", "(See Section 4.1 for details.)", "Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves.", "SynTime instead designs general rules on the token types rather than on the tokens themselves.", "For example, our general rules do not work on tokens 'February' nor '1989' but on their token types 'MONTH' and 'YEAR.'", "That is why we call SynTime a type-based approach.", "More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position.", "In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion.", "The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time.", "The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "(The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.)", "Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.", "3 datasets.", "More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset.", "To sum up, we make the following contributions.", "• We analyze time expressions from four datasets and make four findings.", "The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "• We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules.", "Syn-Time is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages.", "• We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines.", "Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "The task is divided into two subtasks: recognition and normalization.", "Rule-based Time Expression Recognition.", "Rule-based time taggers like GUTime, Heidel-Time, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Strötgen and Gertz, 2010; Chang and Manning, 2012) .", "Heidel-Time (Strötgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression.", "SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang 
and Manning, 2014) .", "It first identifies individual words, then expands them to chunks, and finally to time expressions.", "Rule-based taggers achieve very good results in TempEval exercises.", "SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens.", "Moreover, SynTime designs rules in a heuristic way.", "Machine Learning based Method.", "Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions.", "Example features include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013) .", "The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Some models obtain good performance, and even achieve the highest F 1 of 82.71% on strict match in TempEval-3 (Bethard, 2013) .", "Outside TempEval exercises, Angeli et al.", "leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012) .", "In the method named UWTime, Lee et al.", "handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014) .", "The two methods explicitly use linguistic information.", "In (Lee et al., 2014) , especially, CCG could capture rich structure information of language, similar to the rule-based methods.", "Tabassum et al.", "focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016) .", "They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.", "However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time.", "Time Expression Normalization.", "Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Strötgen and Gertz, 2010; Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Because the rule systems have high similarity, Llorens et al.", "suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012) .", "Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016) .", "Lee et al.", "(Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al.", "(Tabassum et al., 2016 ) use a loglinear algorithm to normalize time expressions.", "SynTime focuses only on the recognition task.", "The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012) .", "We conduct an analysis on four datasets: Time-Bank, Gigaword, WikiWars, and Tweets.", "Time-Bank (Pustejovsky et al., 2003b ) is a benchmark dataset in TempEval series (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , consisting of 183 news articles.", "Gigaword (Parker et al., 2011 ) is a large 
automatically labelled dataset with 2,452 news articles and used in TempEval-3.", "WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010) .", "Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression.", "Table 1 summarizes the datasets.", "Finding From the four datasets, we analyze their time expressions and make four findings.", "We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics.", "Finding 1 Time expressions are very short.", "More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words.", "Figure 1 plots the length distribution of time expressions.", "Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length Finding 2 More than 91% of time expressions contain at least one time token.", "The second column in Table 2 reports the percentage of time expressions that contain at least one time token.", "We find that at least 91.81% of time expressions contain time token(s).", "(Some time expressions have no time token but depend on other time expressions; in '2 to 8 days,' for example, '2' depends on '8 days.')", "This suggests that time tokens account for time expressions.", "Therefore, to recognize time expressions, it is essential to recognize their time tokens.", "Finding 3 Only a small group of time-related keywords are used to express time information.", "From the time expressions in all four datasets, we find that the group of keywords used to express time information is small.", "Table 3 reports the number of distinct words and of distinct time tokens.", "The words/tokens are manually normalized before counting and their variants are ignored.", "For example, 'year' and '5yrs' are counted as one token 'year.'", "Numerals in the counting are ignored.", "Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable.", "Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282.", "Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets.", "This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets.", "In other words, time expressions highly overlap at their time tokens.", "Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents.", "For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text.", "Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD.", "This indicates that POS could not provide enough information to distinguish time expressions from common words.", "However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT.", "Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD.", "This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way.", "When seeing this, we realize that this is exactly how linguists define part-of-speech for 
language.", "4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language.", "The four findings all relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small.", "To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral.", "SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types.", "Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level.", "Token types at the type level group the tokens of time expressions.", "Heuristic rules lie at the rule level, working on token types rather than on tokens themselves.", "That is why the heuristic rules are general.", "For example, the heuristic rules do not work on tokens '1989' nor 'February,' but on their token types 'YEAR' and 'MONTH.'", "The heuristic rules are only relevant to token types, and are independent of specific tokens.", "For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English.", "The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types.", "Figure 3 shows the overview of SynTime in practice.", "Shown on the left-hand side, SynTime is initialized with regular expressions over tokens.", "After initialization, SynTime can be directly applied on text.", "On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type.", "The expansion enables SynTime to recognize time expressions in text from different domains and different text types.", "Shown on the right-hand side of Figure 3 , Syn-Time recognizes time expression through three main steps.", "In the first step, SynTime identifies time tokens from the POS-tagged raw text.", "Then around the time tokens SynTime searches for modifiers and numerals to form time segments.", "In the last step, SynTime transforms the time segments to time expressions.", "SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral.", "Token types to tokens is like POS tags to words; for example, 'February' has a POS tag of NNP and a token type of MONTH.", "Time Token.", "We define 15 token types for the time tokens and use their names similar to Joda-Time classes: 5 DECADE (-), YEAR (-), SEA-SON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), 
TIME ZONE (6), and ERA (2).", "Number in '()' indicates the number of distinct tokens in this token type.", "'-' indicates that this token type involves changing digits and cannot be counted.", "Modifier.", "We define 3 token types for the modifiers according to their possible positions relative to time tokens.", "Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2).", "LINKAGE (4) link two time tokens.", "Besides, we define 2 special modifier types, COMMA (1) for comma ',' and IN ARTICLE (2) for indefinite articles 'a' and 'an.'", "TimeML (Pustejovsky et al., 2003a) and Time-Bank (Pustejovsky et al., 2003b) do not treat most prepositions like 'on' as a part of time expressions.", "Thus SynTime does not collect those prepositions.", "Numeral.", "Number in time expressions can be a time token e.g., '10' in 'October 10, 2016,' or a modifier e.g., '10' in '10 days.'", "We define NU-MERAL (-) for the ordinals and numbers.", "SynTime Initialization.", "The token regular expressions for initializing SynTime are collected from SUTime, 6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval-3 (Chang and Manning, , 2013 .", "Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions.", "Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions.", "The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction.", "Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions.", "Some words might cause ambiguity.", "For example, 'May' could be a modal verb, or the fifth month of year.", "To filter out the ambiguous words, we use POS information.", "In implementation, we use Stanford POS Tagger; 7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2.", "Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions.", "In the next two steps, SynTime works on those token types.", "Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment.", "The searching is under simple heuristic rules in which the key idea is to expand the time token's boundaries.", "At first, each time token is a time segment.", "If it is either a PERIOD or DURATION, then no need to further search.", "Otherwise, search its left and its right for modifiers and numerals.", "For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching.", "For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching.", "Both the left and the right searching stop when reaching a COMMA or LINK-AGE or a non-modifier/numeral word.", "The left searching does not exceed the previous time token; the right searching does not exceed the next time token.", "A time segment consists of exactly one time token, and zero or some modifiers/numerals.", "A special kind of time segments do not contain any time token; they depend on other time segments next to them.", "For example, 
in '8 to 20 days,' 'to 20 days' is a time segment, and '8 to' forms a dependent time segment.", "(See Figure 4(e) .)", "Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment.", "We scan the time segments in a sentence from beginning to the end.", "A stand-alone time segment is a time expression.", "(See Figure 4(a) .)", "The focus is to deal with two or more time segments that are adjacent or overlapping.", "If two time segments s 1 and s 2 are adjacent, merge them to form a new time segment s 1 .", "(See Figure 4(b) .)", "Consider that s 1 and s 2 overlap at a shared boundary.", "According to our time segment identification, the shared boundary could be a modifier or a numeral.", "If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s 1 and s 2 .", "(See Figure 4(c) .)", "If the word is a LINKAGE, then extract s 1 as a time expression and continue scanning.", "When the shared boundary is a COMMA, merge s 1 and s 2 only if the COMMA's previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same.", "(See Figure 4(d) .)", "Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types.", "After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types.", "SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule.", "The expansion requires the words to be added to be annotated manually.", "We apply the initial SynTime on the time expressions from training text and list the words that are not covered.", "Whether the uncovered words are added to SynTime is manually determined.", "The rule for determination is that the added words can not cause ambiguity and should be generic.", "Wiki-Wars dataset contains a few examples like this: 'The time Arnold reached Quebec City.'", "Words in this example are extremely descriptive, and we do not collect them.", "In tweets, on the other hand, people may use abbreviations and informal variants; for example, '2day' and 'tday' are popular spellings of 'today.'", "Such kind of abbreviations and informal variants will be collected.", "According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much.", "In addition, we find that even in tweets people tend to use formal words.", "In the Twitter word clusters trained from 56 million English tweets, 8 the most often used words are the formal words, and their frequencies are much greater than the informal words'.", "The cluster of 'today,' 9 for example, its most often use is the formal one, 'today,' which appears 1,220,829 times; while its second most often use '2day' appears only 34,827 times.", "The low rate of informal words (e.g., about 3% in 'today' cluster) suggests that even in informal environment the manual keyword addition costs little.", "Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UW-Time) on three datasets (i.e., TimeBank, Wiki-Wars, and Tweets).", "WikiWars is a specific domain 
dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text.", "For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I.", "Experiment Setting Datasets.", "We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset that are collected from Twitter.", "For Tweets dataset, we randomly sample 4000 tweets and use SUTime to tag them.", "942 tweets of which each contains at least one time expression.", "From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions.", "We therefore roughly consider that SU-Time misses about 3% time expressions in tweets.", "Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank.", "We finally get 1,127 manually labeled time expressions.", "For the 942 tweets, we randomly sample 200 tweets as test set, and the rest 742 as training set, because a baseline UWTime requires training.", "Baseline Methods.", "We compare SynTime with methods: HeidelTime (Strötgen and Gertz, 2010) , SUTime (Chang and , and UW- Evaluation Metrics.", "We follow TempEval-3 and use their evaluation toolkit 10 to report P recision, Recall, and F 1 in terms of strict match and relaxed match (UzZaman et al., 2013).", "22, 1986' and 'February 01, 1989 ' at the level of word or of character.", "One suggestion is to consider a type-based learning method that could use type information.", "For example, the above two time expressions refer to the same pattern of 'MONTH NUMERAL COMMA Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. 
On the TimeBank and Tweets datasets, only a few tokens are added, so the corresponding results are affected only slightly.", "This confirms that the set of time words is small, and that SynTime-I covers most time words.", "On the WikiWars dataset, relatively more tokens are added, and SynTime-E performs much better than SynTime-I, especially in recall.", "It improves the recall by 3.25% in strict match and by 2.98% in relaxed match.", "This indicates that with more words added from specific domains (e.g., the WikiWars dataset about war), SynTime can significantly improve its performance.", "Experiment Result Limitations SynTime assumes that words are tokenized and POS tagged correctly.", "In reality, however, the tokenized and tagged words are not perfect, due to the limitations of the tools used.", "For example, the Stanford POS Tagger assigns VBD to the word 'sat' in 'friday or sat', whereas its tag should be NNP.", "The incorrect tokens and POS tags affect the results.", "Conclusion and future work We conduct an analysis of time expressions from four datasets, and find that time expressions in general are very short and expressed by a small vocabulary, and that words in time expressions demonstrate similar syntactic behaviour.", "Our findings provide evidence, in terms of time expressions, for the principle of least effort (Zipf, 1949).", "Inspired by part-of-speech and based on the findings, we define a syntactic type system for time expressions, and propose a type-based time expression tagger named SynTime.", "SynTime defines syntactic token types for tokens, and on these token types it designs general heuristic rules based on the idea of boundary expansion.", "Experiments on three datasets show that SynTime outperforms the state-of-the-art baselines, including rule-based time taggers and a machine learning based time tagger.", "Because our heuristic rules are quite simple, SynTime is lightweight and runs in real time.", "Our token types and heuristic rules are independent of specific tokens, so SynTime is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "Testing on other languages requires only constructing a collection of token regular expressions in the target language under our defined token types.", "Time expressions are part of language and follow the principle of least effort.", "Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995), we might expect that humans share some common habits, and therefore that other parts of language more or less follow the same principle.", "In the future we will apply our analytical method to other parts of language." ] }
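Sections 4.2.2 and 4.2.3 in the paper content above describe the boundary-expansion and merging heuristics in prose. The sketch below is our simplified reconstruction, not the released SynTime code: it omits the PERIOD/DURATION early stop, the COMMA/LINKAGE conditions, and the rule that expansion must not cross a neighbouring time token, and the type names are assumed.

```python
# Each input token carries a token type (or None for an ordinary word).
TIME_TYPES = {"YEAR", "MONTH", "WEEK", "DATE", "TIME_UNIT", "DAY_TIME"}
LEFT_OK   = {"PREFIX", "NUMERAL", "IN_ARTICLE"}   # may extend a segment leftwards
RIGHT_OK  = {"SUFFIX", "NUMERAL"}                 # may extend a segment rightwards

def time_segments(typed_tokens):
    """One segment per time token, grown left/right over modifiers/numerals."""
    segments = []
    for i, (_, ttype) in enumerate(typed_tokens):
        if ttype not in TIME_TYPES:
            continue
        lo = i
        while lo > 0 and typed_tokens[lo - 1][1] in LEFT_OK:
            lo -= 1
        hi = i
        while hi + 1 < len(typed_tokens) and typed_tokens[hi + 1][1] in RIGHT_OK:
            hi += 1
        segments.append((lo, hi))
    return segments

def merge(segments):
    """Merge adjacent or overlapping segments into time expressions."""
    merged = []
    for lo, hi in segments:
        if merged and lo <= merged[-1][1] + 1:
            merged[-1] = (merged[-1][0], max(hi, merged[-1][1]))
        else:
            merged.append((lo, hi))
    return merged

tokens = [("next", "PREFIX"), ("week", "TIME_UNIT"), (",", None),
          ("10", "NUMERAL"), ("days", "TIME_UNIT"), ("ago", "SUFFIX")]
spans = merge(time_segments(tokens))
print([[w for w, _ in tokens[lo:hi + 1]] for lo, hi in spans])
# -> [['next', 'week'], ['10', 'days', 'ago']]
```

Note that the heuristics operate only on the token types, never on the surface tokens, which is what makes them reusable across domains.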
{ "paper_header_number": [ "1", "2", "3.2", "4", "4.1", "4.2", "4.2.1", "4.2.2", "4.2.3", "4.3", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Finding", "SynTime: Syntactic Token Types and General Heuristic Rules", "SynTime Construction", "Time Expression Recognition", "Time Token Identification", "Time Segment Identification", "Time Expression Extraction", "SynTime Expansion", "Experiments", "Limitations", "Conclusion and future work" ] }
GEM-SciDuet-train-99#paper-1262#slide-7
Time Expression Analysis Summary
On average, a time expression contains two tokens, of which one is a time token and the other is a modifier/numeral. And the vocabulary of time tokens is small. To recognize a time expression, we first recognize the time token, then recognize the modifier/numeral.
On average, a time expression contains two tokens, of which one is a time token and the other is a modifier/numeral. And the vocabulary of time tokens is small. To recognize a time expression, we first recognize the time token, then recognize the modifier/numeral.
[]
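The first step of the strategy summarized on this slide, identifying the time token, has to resolve ambiguous words such as 'May' or 'sat' (Section 4.2.1 of the paper content does this with POS tags). A minimal illustration follows; it uses NLTK purely as a stand-in tagger (the paper used the Stanford POS Tagger), the keyword sets are toy assumptions, and the NLTK tokenizer/tagger models are assumed to be installed.

```python
import nltk  # assumes punkt + averaged_perceptron_tagger data are available

# Words that count as time tokens only under a noun-like reading.
AMBIGUOUS_TIME_WORDS = {"may", "march", "sat", "sun", "fall"}
ACCEPTED_TAGS = {"NN", "NNS", "NNP", "NNPS", "RB"}
UNAMBIGUOUS_TIME_WORDS = {"today", "yesterday", "week", "year", "monday", "july"}

def is_time_token(word, pos):
    w = word.lower()
    if w in AMBIGUOUS_TIME_WORDS:
        return pos in ACCEPTED_TAGS    # the POS tag must confirm the reading
    return w in UNAMBIGUOUS_TIME_WORDS

for word, pos in nltk.pos_tag(nltk.word_tokenize("We may meet in May")):
    print(word, pos, is_time_token(word, pos))
# 'may' (tagged MD) is rejected; 'May' (tagged NNP) is accepted
```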
GEM-SciDuet-train-99#paper-1262#slide-8
1262
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime 1 for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249 ], "paper_content_text": [ "Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014) .", "Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "1 Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b) , Gigaword (Parker et al., 2011) , WikiWars (Mazur and Dale, 2010) , and Tweets.", "From the analysis we make four findings about time expressions.", "First, most time expressions are very short, with 80% of time expressions containing no more than three tokens.", "Second, at least 91.8% of time expressions contain at least one time token.", "Third, the vocabulary used to express time information is very small, with a small group of keywords.", "Finally, words in time expressions demonstrate similar syntactic behaviour.", "All the findings relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "According to the findings we propose a typebased approach named SynTime ('Syn' stands for syntactic) to recognize time expressions.", "Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions.", "Time tokens are the words that explicitly express time information, such as time units (e.g., 'year').", "Modifiers modify time tokens; they appear before or after time tokens, e.g., 'several' and 'ago' in 'several years ago.'", "Numerals are ordinals and numbers.", "From free text SynTime first identifies time tokens, then recognizes modifiers and numerals.", "Naturally, SynTime is a rule-based tagger.", "The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules.", "The definition of token type in SynTime is inspired by 
part-of-speech in which \"linguists group some words of language into classes (sets) which show similar syntactic behaviour.\"", "(Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour.", "Other rulebased taggers define types for tokens based on their semantic meaning.", "For example, SUTime defines 5 semantic modifier types, such as frequency modifiers; 2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens.", "(See Section 4.1 for details.)", "Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves.", "SynTime instead designs general rules on the token types rather than on the tokens themselves.", "For example, our general rules do not work on tokens 'February' nor '1989' but on their token types 'MONTH' and 'YEAR.'", "That is why we call SynTime a type-based approach.", "More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position.", "In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion.", "The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time.", "The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "(The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.)", "Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.", "3 datasets.", "More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset.", "To sum up, we make the following contributions.", "• We analyze time expressions from four datasets and make four findings.", "The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "• We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules.", "Syn-Time is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages.", "• We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines.", "Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "The task is divided into two subtasks: recognition and normalization.", "Rule-based Time Expression Recognition.", "Rule-based time taggers like GUTime, Heidel-Time, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Strötgen and Gertz, 2010; Chang and Manning, 2012) .", "Heidel-Time (Strötgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression.", "SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang 
and Manning, 2014) .", "It first identifies individual words, then expands them to chunks, and finally to time expressions.", "Rule-based taggers achieve very good results in TempEval exercises.", "SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens.", "Moreover, SynTime designs rules in a heuristic way.", "Machine Learning based Method.", "Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions.", "Example features include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013) .", "The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Some models obtain good performance, and even achieve the highest F 1 of 82.71% on strict match in TempEval-3 (Bethard, 2013) .", "Outside TempEval exercises, Angeli et al.", "leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012) .", "In the method named UWTime, Lee et al.", "handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014) .", "The two methods explicitly use linguistic information.", "In (Lee et al., 2014) , especially, CCG could capture rich structure information of language, similar to the rule-based methods.", "Tabassum et al.", "focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016) .", "They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.", "However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time.", "Time Expression Normalization.", "Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Strötgen and Gertz, 2010; Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Because the rule systems have high similarity, Llorens et al.", "suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012) .", "Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016) .", "Lee et al.", "(Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al.", "(Tabassum et al., 2016 ) use a loglinear algorithm to normalize time expressions.", "SynTime focuses only on the recognition task.", "The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012) .", "We conduct an analysis on four datasets: Time-Bank, Gigaword, WikiWars, and Tweets.", "Time-Bank (Pustejovsky et al., 2003b ) is a benchmark dataset in TempEval series (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , consisting of 183 news articles.", "Gigaword (Parker et al., 2011 ) is a large 
automatically labelled dataset with 2,452 news articles and used in TempEval-3.", "WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010) .", "Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression.", "Table 1 summarizes the datasets.", "Finding From the four datasets, we analyze their time expressions and make four findings.", "We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics.", "Finding 1 Time expressions are very short.", "More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words.", "Figure 1 plots the length distribution of time expressions.", "Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length Finding 2 More than 91% of time expressions contain at least one time token.", "The second column in Table 2 reports the percentage of time expressions that contain at least one time token.", "We find that at least 91.81% of time expressions contain time token(s).", "(Some time expressions have no time token but depend on other time expressions; in '2 to 8 days,' for example, '2' depends on '8 days.')", "This suggests that time tokens account for time expressions.", "Therefore, to recognize time expressions, it is essential to recognize their time tokens.", "Finding 3 Only a small group of time-related keywords are used to express time information.", "From the time expressions in all four datasets, we find that the group of keywords used to express time information is small.", "Table 3 reports the number of distinct words and of distinct time tokens.", "The words/tokens are manually normalized before counting and their variants are ignored.", "For example, 'year' and '5yrs' are counted as one token 'year.'", "Numerals in the counting are ignored.", "Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable.", "Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282.", "Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets.", "This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets.", "In other words, time expressions highly overlap at their time tokens.", "Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents.", "For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text.", "Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD.", "This indicates that POS could not provide enough information to distinguish time expressions from common words.", "However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT.", "Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD.", "This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way.", "When seeing this, we realize that this is exactly how linguists define part-of-speech for 
language.", "4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language.", "The four findings all relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small.", "To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral.", "SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types.", "Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level.", "Token types at the type level group the tokens of time expressions.", "Heuristic rules lie at the rule level, working on token types rather than on tokens themselves.", "That is why the heuristic rules are general.", "For example, the heuristic rules do not work on tokens '1989' nor 'February,' but on their token types 'YEAR' and 'MONTH.'", "The heuristic rules are only relevant to token types, and are independent of specific tokens.", "For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English.", "The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types.", "Figure 3 shows the overview of SynTime in practice.", "Shown on the left-hand side, SynTime is initialized with regular expressions over tokens.", "After initialization, SynTime can be directly applied on text.", "On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type.", "The expansion enables SynTime to recognize time expressions in text from different domains and different text types.", "Shown on the right-hand side of Figure 3 , Syn-Time recognizes time expression through three main steps.", "In the first step, SynTime identifies time tokens from the POS-tagged raw text.", "Then around the time tokens SynTime searches for modifiers and numerals to form time segments.", "In the last step, SynTime transforms the time segments to time expressions.", "SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral.", "Token types to tokens is like POS tags to words; for example, 'February' has a POS tag of NNP and a token type of MONTH.", "Time Token.", "We define 15 token types for the time tokens and use their names similar to Joda-Time classes: 5 DECADE (-), YEAR (-), SEA-SON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), 
TIME ZONE (6), and ERA (2).", "Number in '()' indicates the number of distinct tokens in this token type.", "'-' indicates that this token type involves changing digits and cannot be counted.", "Modifier.", "We define 3 token types for the modifiers according to their possible positions relative to time tokens.", "Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2).", "LINKAGE (4) link two time tokens.", "Besides, we define 2 special modifier types, COMMA (1) for comma ',' and IN ARTICLE (2) for indefinite articles 'a' and 'an.'", "TimeML (Pustejovsky et al., 2003a) and Time-Bank (Pustejovsky et al., 2003b) do not treat most prepositions like 'on' as a part of time expressions.", "Thus SynTime does not collect those prepositions.", "Numeral.", "Number in time expressions can be a time token e.g., '10' in 'October 10, 2016,' or a modifier e.g., '10' in '10 days.'", "We define NU-MERAL (-) for the ordinals and numbers.", "SynTime Initialization.", "The token regular expressions for initializing SynTime are collected from SUTime, 6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval-3 (Chang and Manning, , 2013 .", "Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions.", "Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions.", "The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction.", "Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions.", "Some words might cause ambiguity.", "For example, 'May' could be a modal verb, or the fifth month of year.", "To filter out the ambiguous words, we use POS information.", "In implementation, we use Stanford POS Tagger; 7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2.", "Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions.", "In the next two steps, SynTime works on those token types.", "Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment.", "The searching is under simple heuristic rules in which the key idea is to expand the time token's boundaries.", "At first, each time token is a time segment.", "If it is either a PERIOD or DURATION, then no need to further search.", "Otherwise, search its left and its right for modifiers and numerals.", "For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching.", "For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching.", "Both the left and the right searching stop when reaching a COMMA or LINK-AGE or a non-modifier/numeral word.", "The left searching does not exceed the previous time token; the right searching does not exceed the next time token.", "A time segment consists of exactly one time token, and zero or some modifiers/numerals.", "A special kind of time segments do not contain any time token; they depend on other time segments next to them.", "For example, 
in '8 to 20 days,' 'to 20 days' is a time segment, and '8 to' forms a dependent time segment.", "(See Figure 4(e) .)", "Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment.", "We scan the time segments in a sentence from beginning to the end.", "A stand-alone time segment is a time expression.", "(See Figure 4(a) .)", "The focus is to deal with two or more time segments that are adjacent or overlapping.", "If two time segments s 1 and s 2 are adjacent, merge them to form a new time segment s 1 .", "(See Figure 4(b) .)", "Consider that s 1 and s 2 overlap at a shared boundary.", "According to our time segment identification, the shared boundary could be a modifier or a numeral.", "If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s 1 and s 2 .", "(See Figure 4(c) .)", "If the word is a LINKAGE, then extract s 1 as a time expression and continue scanning.", "When the shared boundary is a COMMA, merge s 1 and s 2 only if the COMMA's previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same.", "(See Figure 4(d) .)", "Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types.", "After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types.", "SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule.", "The expansion requires the words to be added to be annotated manually.", "We apply the initial SynTime on the time expressions from training text and list the words that are not covered.", "Whether the uncovered words are added to SynTime is manually determined.", "The rule for determination is that the added words can not cause ambiguity and should be generic.", "Wiki-Wars dataset contains a few examples like this: 'The time Arnold reached Quebec City.'", "Words in this example are extremely descriptive, and we do not collect them.", "In tweets, on the other hand, people may use abbreviations and informal variants; for example, '2day' and 'tday' are popular spellings of 'today.'", "Such kind of abbreviations and informal variants will be collected.", "According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much.", "In addition, we find that even in tweets people tend to use formal words.", "In the Twitter word clusters trained from 56 million English tweets, 8 the most often used words are the formal words, and their frequencies are much greater than the informal words'.", "The cluster of 'today,' 9 for example, its most often use is the formal one, 'today,' which appears 1,220,829 times; while its second most often use '2day' appears only 34,827 times.", "The low rate of informal words (e.g., about 3% in 'today' cluster) suggests that even in informal environment the manual keyword addition costs little.", "Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UW-Time) on three datasets (i.e., TimeBank, Wiki-Wars, and Tweets).", "WikiWars is a specific domain 
dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text.", "For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I.", "Experiment Setting Datasets.", "We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset that are collected from Twitter.", "For Tweets dataset, we randomly sample 4000 tweets and use SUTime to tag them.", "942 tweets of which each contains at least one time expression.", "From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions.", "We therefore roughly consider that SU-Time misses about 3% time expressions in tweets.", "Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank.", "We finally get 1,127 manually labeled time expressions.", "For the 942 tweets, we randomly sample 200 tweets as test set, and the rest 742 as training set, because a baseline UWTime requires training.", "Baseline Methods.", "We compare SynTime with methods: HeidelTime (Strötgen and Gertz, 2010) , SUTime (Chang and , and UW- Evaluation Metrics.", "We follow TempEval-3 and use their evaluation toolkit 10 to report P recision, Recall, and F 1 in terms of strict match and relaxed match (UzZaman et al., 2013).", "22, 1986' and 'February 01, 1989 ' at the level of word or of character.", "One suggestion is to consider a type-based learning method that could use type information.", "For example, the above two time expressions refer to the same pattern of 'MONTH NUMERAL COMMA Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. 
On TimeBank and Tweets datasets, only a few tokens are added, so the corresponding results are affected only slightly.", "This confirms that the set of time words is small, and that SynTime-I covers most time words.", "On the WikiWars dataset, relatively more tokens are added, and SynTime-E performs much better than SynTime-I, especially in recall.", "It improves the recall by 3.25% in strict match and by 2.98% in relaxed match.", "This indicates that with more words added from specific domains (e.g., the WikiWars dataset about war), SynTime can significantly improve the performance.", "Experiment Result Limitations SynTime assumes that words are tokenized and POS tagged correctly.", "In reality, however, the tokenized and tagged words are not perfect, due to the limitations of the tools used.", "For example, the Stanford POS Tagger assigns VBD to the word 'sat' in 'friday or sat,' whereas its tag should be NNP.", "The incorrect tokens and POS tags affect the results.", "Conclusion and future work We conduct an analysis of time expressions from four datasets, and find that time expressions in general are very short and expressed with a small vocabulary, and that words in time expressions demonstrate similar syntactic behavior.", "Our findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949).", "Inspired by part-of-speech, and based on the findings, we define a syntactic type system for time expressions and propose a type-based time expression tagger named SynTime.", "SynTime defines syntactic token types for tokens, and on the token types it designs general heuristic rules based on the idea of boundary expansion.", "Experiments on three datasets show that SynTime outperforms the state-of-the-art baselines, including rule-based time taggers and a machine learning based time tagger.", "Because our heuristic rules are quite simple, SynTime is light-weight and runs in real time.", "Our token types and heuristic rules are independent of specific tokens; SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.", "Time expression is part of language and follows the principle of least effort.", "Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995), we might expect that humans share some common habits, and therefore expect that other parts of language would more or less follow the same principle.", "In the future we will try our analytical method on other parts of language." ] }
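To make the extraction step above concrete, here is a minimal Python sketch of the segment-merging heuristics: adjacent segments merge, a shared modifier/numeral boundary merges, LINKAGE ends the left segment, and a shared COMMA merges only under the three stated conditions. It is an illustration under assumptions rather than the released SynTime implementation: the (start, end) span representation, the is_time_token helper, and the example input are invented for the sketch; only the merging conditions follow the text.

TIME_TOKEN_TYPES = {
    "DECADE", "YEAR", "SEASON", "MONTH", "WEEK", "DATE", "TIME", "DAY_TIME",
    "TIMELINE", "HOLIDAY", "PERIOD", "DURATION", "TIME_UNIT", "TIME_ZONE", "ERA",
}

def is_time_token(token_type):
    return token_type in TIME_TOKEN_TYPES

def merge_segments(segments, token_types):
    """Merge adjacent or boundary-sharing time segments into time expressions.

    segments   : list of (start, end) inclusive token spans, in sentence order
    token_types: token type of every token in the sentence
    """
    expressions = []
    i = 0
    while i < len(segments):
        current = segments[i]
        while i + 1 < len(segments):
            nxt = segments[i + 1]
            if current[1] + 1 == nxt[0]:
                # adjacent segments: merge them
                current = (current[0], nxt[1])
                i += 1
            elif current[1] == nxt[0]:
                shared = token_types[current[1]]
                if shared not in ("COMMA", "LINKAGE"):
                    # shared modifier/numeral boundary: merge
                    current = (current[0], nxt[1])
                    i += 1
                elif shared == "LINKAGE":
                    # emit the left segment as an expression and keep scanning
                    break
                else:
                    # COMMA boundary: merge only under the three stated conditions
                    prev_type = token_types[current[1] - 1]
                    next_type = token_types[current[1] + 1]
                    if ((is_time_token(prev_type) or prev_type == "NUMERAL")
                            and is_time_token(next_type)
                            and prev_type != next_type):
                        current = (current[0], nxt[1])
                        i += 1
                    else:
                        break
            else:
                break
        expressions.append(current)
        i += 1
    return expressions

# Hypothetical input mirroring Figure 4(d): "Friday , March 3" with token types
# ["WEEK", "COMMA", "MONTH", "NUMERAL"] and segments sharing the COMMA at index 1.
print(merge_segments([(0, 1), (1, 3)],
                     ["WEEK", "COMMA", "MONTH", "NUMERAL"]))  # -> [(0, 3)]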
{ "paper_header_number": [ "1", "2", "3.2", "4", "4.1", "4.2", "4.2.1", "4.2.2", "4.2.3", "4.3", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Finding", "SynTime: Syntactic Token Types and General Heuristic Rules", "SynTime Construction", "Time Expression Recognition", "Time Token Identification", "Time Segment Identification", "Time Expression Extraction", "SynTime Expansion", "Experiments", "Limitations", "Conclusion and future work" ] }
GEM-SciDuet-train-99#paper-1262#slide-8
Time Expression Analysis Idea
On average, a time expression contains two tokens: one is a time token and the other is a modifier/numeral. And the set of distinct time tokens is small. To recognize a time expression, we first recognize the time token, then recognize the modifier/numeral. 20 days; this week; next year; July 29;
On average, a time expression contains two tokens: one is a time token and the other is a modifier/numeral. And the set of distinct time tokens is small. To recognize a time expression, we first recognize the time token, then recognize the modifier/numeral. 20 days; this week; next year; July 29;
[]
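The slide above describes the recognition idea: identify the time token first, then expand its boundaries to absorb surrounding modifiers and numerals. A minimal sketch of that boundary expansion follows; the type names come from the paper, but the function, the LEFT_OK/RIGHT_OK sets, and the worked example are assumptions made for illustration, not the released SynTime rules (which additionally stop at COMMA, LINKAGE, and neighbouring time tokens; here anything outside the two sets simply ends the search).

LEFT_OK = {"PREFIX", "NUMERAL", "IN_ARTICLE"}   # tokens absorbed on the left
RIGHT_OK = {"SUFFIX", "NUMERAL"}                # tokens absorbed on the right

def expand_time_token(types, idx):
    """Grow a time segment around the time token at position idx."""
    if types[idx] in ("PERIOD", "DURATION"):
        return (idx, idx)                       # these token types need no expansion
    left = idx
    while left - 1 >= 0 and types[left - 1] in LEFT_OK:
        left -= 1                               # absorb 'next', '10', 'a', ...
    right = idx
    while right + 1 < len(types) and types[right + 1] in RIGHT_OK:
        right += 1                              # absorb 'ago', trailing numerals
    return (left, right)

# e.g. "next 10 years" -> types ["PREFIX", "NUMERAL", "TIME_UNIT"]
print(expand_time_token(["PREFIX", "NUMERAL", "TIME_UNIT"], 2))  # -> (0, 2)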
GEM-SciDuet-train-99#paper-1262#slide-10
1262
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime 1 for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249 ], "paper_content_text": [ "Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014) .", "Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "1 Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b) , Gigaword (Parker et al., 2011) , WikiWars (Mazur and Dale, 2010) , and Tweets.", "From the analysis we make four findings about time expressions.", "First, most time expressions are very short, with 80% of time expressions containing no more than three tokens.", "Second, at least 91.8% of time expressions contain at least one time token.", "Third, the vocabulary used to express time information is very small, with a small group of keywords.", "Finally, words in time expressions demonstrate similar syntactic behaviour.", "All the findings relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "According to the findings we propose a typebased approach named SynTime ('Syn' stands for syntactic) to recognize time expressions.", "Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions.", "Time tokens are the words that explicitly express time information, such as time units (e.g., 'year').", "Modifiers modify time tokens; they appear before or after time tokens, e.g., 'several' and 'ago' in 'several years ago.'", "Numerals are ordinals and numbers.", "From free text SynTime first identifies time tokens, then recognizes modifiers and numerals.", "Naturally, SynTime is a rule-based tagger.", "The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules.", "The definition of token type in SynTime is inspired by 
part-of-speech in which \"linguists group some words of language into classes (sets) which show similar syntactic behaviour.\"", "(Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour.", "Other rulebased taggers define types for tokens based on their semantic meaning.", "For example, SUTime defines 5 semantic modifier types, such as frequency modifiers; 2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens.", "(See Section 4.1 for details.)", "Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves.", "SynTime instead designs general rules on the token types rather than on the tokens themselves.", "For example, our general rules do not work on tokens 'February' nor '1989' but on their token types 'MONTH' and 'YEAR.'", "That is why we call SynTime a type-based approach.", "More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position.", "In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion.", "The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time.", "The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "(The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.)", "Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.", "3 datasets.", "More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset.", "To sum up, we make the following contributions.", "• We analyze time expressions from four datasets and make four findings.", "The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "• We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules.", "Syn-Time is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages.", "• We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines.", "Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "The task is divided into two subtasks: recognition and normalization.", "Rule-based Time Expression Recognition.", "Rule-based time taggers like GUTime, Heidel-Time, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Strötgen and Gertz, 2010; Chang and Manning, 2012) .", "Heidel-Time (Strötgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression.", "SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang 
and Manning, 2014) .", "It first identifies individual words, then expands them to chunks, and finally to time expressions.", "Rule-based taggers achieve very good results in TempEval exercises.", "SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens.", "Moreover, SynTime designs rules in a heuristic way.", "Machine Learning based Method.", "Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions.", "Example features include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013) .", "The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Some models obtain good performance, and even achieve the highest F 1 of 82.71% on strict match in TempEval-3 (Bethard, 2013) .", "Outside TempEval exercises, Angeli et al.", "leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012) .", "In the method named UWTime, Lee et al.", "handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014) .", "The two methods explicitly use linguistic information.", "In (Lee et al., 2014) , especially, CCG could capture rich structure information of language, similar to the rule-based methods.", "Tabassum et al.", "focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016) .", "They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.", "However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time.", "Time Expression Normalization.", "Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Strötgen and Gertz, 2010; Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Because the rule systems have high similarity, Llorens et al.", "suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012) .", "Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016) .", "Lee et al.", "(Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al.", "(Tabassum et al., 2016 ) use a loglinear algorithm to normalize time expressions.", "SynTime focuses only on the recognition task.", "The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012) .", "We conduct an analysis on four datasets: Time-Bank, Gigaword, WikiWars, and Tweets.", "Time-Bank (Pustejovsky et al., 2003b ) is a benchmark dataset in TempEval series (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , consisting of 183 news articles.", "Gigaword (Parker et al., 2011 ) is a large 
automatically labelled dataset with 2,452 news articles and used in TempEval-3.", "WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010) .", "Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression.", "Table 1 summarizes the datasets.", "Finding From the four datasets, we analyze their time expressions and make four findings.", "We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics.", "Finding 1 Time expressions are very short.", "More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words.", "Figure 1 plots the length distribution of time expressions.", "Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length Finding 2 More than 91% of time expressions contain at least one time token.", "The second column in Table 2 reports the percentage of time expressions that contain at least one time token.", "We find that at least 91.81% of time expressions contain time token(s).", "(Some time expressions have no time token but depend on other time expressions; in '2 to 8 days,' for example, '2' depends on '8 days.')", "This suggests that time tokens account for time expressions.", "Therefore, to recognize time expressions, it is essential to recognize their time tokens.", "Finding 3 Only a small group of time-related keywords are used to express time information.", "From the time expressions in all four datasets, we find that the group of keywords used to express time information is small.", "Table 3 reports the number of distinct words and of distinct time tokens.", "The words/tokens are manually normalized before counting and their variants are ignored.", "For example, 'year' and '5yrs' are counted as one token 'year.'", "Numerals in the counting are ignored.", "Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable.", "Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282.", "Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets.", "This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets.", "In other words, time expressions highly overlap at their time tokens.", "Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents.", "For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text.", "Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD.", "This indicates that POS could not provide enough information to distinguish time expressions from common words.", "However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT.", "Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD.", "This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way.", "When seeing this, we realize that this is exactly how linguists define part-of-speech for 
language.", "4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language.", "The four findings all relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small.", "To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral.", "SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types.", "Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level.", "Token types at the type level group the tokens of time expressions.", "Heuristic rules lie at the rule level, working on token types rather than on tokens themselves.", "That is why the heuristic rules are general.", "For example, the heuristic rules do not work on tokens '1989' nor 'February,' but on their token types 'YEAR' and 'MONTH.'", "The heuristic rules are only relevant to token types, and are independent of specific tokens.", "For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English.", "The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types.", "Figure 3 shows the overview of SynTime in practice.", "Shown on the left-hand side, SynTime is initialized with regular expressions over tokens.", "After initialization, SynTime can be directly applied on text.", "On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type.", "The expansion enables SynTime to recognize time expressions in text from different domains and different text types.", "Shown on the right-hand side of Figure 3 , Syn-Time recognizes time expression through three main steps.", "In the first step, SynTime identifies time tokens from the POS-tagged raw text.", "Then around the time tokens SynTime searches for modifiers and numerals to form time segments.", "In the last step, SynTime transforms the time segments to time expressions.", "SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral.", "Token types to tokens is like POS tags to words; for example, 'February' has a POS tag of NNP and a token type of MONTH.", "Time Token.", "We define 15 token types for the time tokens and use their names similar to Joda-Time classes: 5 DECADE (-), YEAR (-), SEA-SON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), 
TIME ZONE (6), and ERA (2).", "Number in '()' indicates the number of distinct tokens in this token type.", "'-' indicates that this token type involves changing digits and cannot be counted.", "Modifier.", "We define 3 token types for the modifiers according to their possible positions relative to time tokens.", "Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2).", "LINKAGE (4) link two time tokens.", "Besides, we define 2 special modifier types, COMMA (1) for comma ',' and IN ARTICLE (2) for indefinite articles 'a' and 'an.'", "TimeML (Pustejovsky et al., 2003a) and Time-Bank (Pustejovsky et al., 2003b) do not treat most prepositions like 'on' as a part of time expressions.", "Thus SynTime does not collect those prepositions.", "Numeral.", "Number in time expressions can be a time token e.g., '10' in 'October 10, 2016,' or a modifier e.g., '10' in '10 days.'", "We define NU-MERAL (-) for the ordinals and numbers.", "SynTime Initialization.", "The token regular expressions for initializing SynTime are collected from SUTime, 6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval-3 (Chang and Manning, , 2013 .", "Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions.", "Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions.", "The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction.", "Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions.", "Some words might cause ambiguity.", "For example, 'May' could be a modal verb, or the fifth month of year.", "To filter out the ambiguous words, we use POS information.", "In implementation, we use Stanford POS Tagger; 7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2.", "Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions.", "In the next two steps, SynTime works on those token types.", "Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment.", "The searching is under simple heuristic rules in which the key idea is to expand the time token's boundaries.", "At first, each time token is a time segment.", "If it is either a PERIOD or DURATION, then no need to further search.", "Otherwise, search its left and its right for modifiers and numerals.", "For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching.", "For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching.", "Both the left and the right searching stop when reaching a COMMA or LINK-AGE or a non-modifier/numeral word.", "The left searching does not exceed the previous time token; the right searching does not exceed the next time token.", "A time segment consists of exactly one time token, and zero or some modifiers/numerals.", "A special kind of time segments do not contain any time token; they depend on other time segments next to them.", "For example, 
in '8 to 20 days,' 'to 20 days' is a time segment, and '8 to' forms a dependent time segment.", "(See Figure 4(e) .)", "Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment.", "We scan the time segments in a sentence from beginning to the end.", "A stand-alone time segment is a time expression.", "(See Figure 4(a) .)", "The focus is to deal with two or more time segments that are adjacent or overlapping.", "If two time segments s 1 and s 2 are adjacent, merge them to form a new time segment s 1 .", "(See Figure 4(b) .)", "Consider that s 1 and s 2 overlap at a shared boundary.", "According to our time segment identification, the shared boundary could be a modifier or a numeral.", "If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s 1 and s 2 .", "(See Figure 4(c) .)", "If the word is a LINKAGE, then extract s 1 as a time expression and continue scanning.", "When the shared boundary is a COMMA, merge s 1 and s 2 only if the COMMA's previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same.", "(See Figure 4(d) .)", "Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types.", "After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types.", "SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule.", "The expansion requires the words to be added to be annotated manually.", "We apply the initial SynTime on the time expressions from training text and list the words that are not covered.", "Whether the uncovered words are added to SynTime is manually determined.", "The rule for determination is that the added words can not cause ambiguity and should be generic.", "Wiki-Wars dataset contains a few examples like this: 'The time Arnold reached Quebec City.'", "Words in this example are extremely descriptive, and we do not collect them.", "In tweets, on the other hand, people may use abbreviations and informal variants; for example, '2day' and 'tday' are popular spellings of 'today.'", "Such kind of abbreviations and informal variants will be collected.", "According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much.", "In addition, we find that even in tweets people tend to use formal words.", "In the Twitter word clusters trained from 56 million English tweets, 8 the most often used words are the formal words, and their frequencies are much greater than the informal words'.", "The cluster of 'today,' 9 for example, its most often use is the formal one, 'today,' which appears 1,220,829 times; while its second most often use '2day' appears only 34,827 times.", "The low rate of informal words (e.g., about 3% in 'today' cluster) suggests that even in informal environment the manual keyword addition costs little.", "Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UW-Time) on three datasets (i.e., TimeBank, Wiki-Wars, and Tweets).", "WikiWars is a specific domain 
dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text.", "For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I.", "Experiment Setting Datasets.", "We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset that are collected from Twitter.", "For Tweets dataset, we randomly sample 4000 tweets and use SUTime to tag them.", "942 tweets of which each contains at least one time expression.", "From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions.", "We therefore roughly consider that SU-Time misses about 3% time expressions in tweets.", "Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank.", "We finally get 1,127 manually labeled time expressions.", "For the 942 tweets, we randomly sample 200 tweets as test set, and the rest 742 as training set, because a baseline UWTime requires training.", "Baseline Methods.", "We compare SynTime with methods: HeidelTime (Strötgen and Gertz, 2010) , SUTime (Chang and , and UW- Evaluation Metrics.", "We follow TempEval-3 and use their evaluation toolkit 10 to report P recision, Recall, and F 1 in terms of strict match and relaxed match (UzZaman et al., 2013).", "22, 1986' and 'February 01, 1989 ' at the level of word or of character.", "One suggestion is to consider a type-based learning method that could use type information.", "For example, the above two time expressions refer to the same pattern of 'MONTH NUMERAL COMMA Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. 
On TimeBank and Tweets datasets, only a few tokens are added, the corresponding results are affected slightly.", "This confirms that the size of time words is small, and that SynTime-I covers most of time words.", "On WikiWars dataset, relatively more tokens are added, SynTime-E performs much better than SynTime-I, especially in recall.", "It improves the recall by 3.25% in strict match and by 2.98% in relaxed match.", "This indicates that with more words added from specific domains (e.g., WikiWars dataset about war), SynTime can significantly improve the performance.", "Experiment Result Limitations SynTime assumes that words are tokenized and POS tagged correctly.", "In reality, however, the tokenized and tagged words are not that perfect, due to the limitation of used tools.", "For example, Stanford POS Tagger assigns VBD to the word 'sat' in 'friday or sat' while whose tag should be NNP.", "The incorrect tokens and POS tags affect the result.", "Conclusion and future work We conduct an analysis on time expressions from four datasets, and find that time expressions in general are very short and expressed by a small vocabulary, and words in time expressions demonstrate similar syntactic behavior.", "Our findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "Inspired by part-of-speech, based on the findings, we define a syntactic type system for the time expression, and propose a type-based time expression tagger, named by SynTime.", "SynTime defines syntactic token types for tokens and on the token types it designs general heuristic rules based on the idea of boundary expansion.", "Experiments on three datasets show that SynTime outperforms the stateof-the-art baselines, including rule-based time taggers and machine learning based time tagger.", "Because our heuristic rules are quite simple, Syn-Time is light-weight and runs in real time.", "Our token types and heuristic rules are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.", "Time expression is part of language and follows the principle of least effort.", "Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995) , we might expect that humans would share some common habits, and therefore expect that other parts of language would more or less follow the same principle.", "In the future we will try our analytical method on other parts of language." ] }
{ "paper_header_number": [ "1", "2", "3.2", "4", "4.1", "4.2", "4.2.1", "4.2.2", "4.2.3", "4.3", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Finding", "SynTime: Syntactic Token Types and General Heuristic Rules", "SynTime Construction", "Time Expression Recognition", "Time Token Identification", "Time Segment Identification", "Time Expression Extraction", "SynTime Expansion", "Experiments", "Limitations", "Conclusion and future work" ] }
GEM-SciDuet-train-99#paper-1262#slide-10
Time Expression Recognition SynTime
Syntactic token types A type system Time token: explicitly express time information, e.g., year 15 token types: DECADE, YEAR, SEASON, MONTH, WEEK, DATE, TIME, DAY_TIME, TIMELINE, HOLIDAY, PERIOD, DURATION, TIME_UNIT, TIME_ZONE, ERA Modifier: modify time tokens, e.g., next modifies year in next year 5 token types: PREFIX, SUFFIX, LINKAGE, COMMA, IN_ARTICLE Numeral: ordinals and numbers, e.g., 10 in next 10 years 1 token type: NUMERAL Token types to tokens is like POS tags to words POS tags: next/JJ 10/CD years/NNS Token types: next/PREFIX 10/NUMERAL years/TIME_UNIT Only relevant to token types Independent of specific tokens
Syntactic token types A type system Time token: explicitly express time information, e.g., year 15 token types: DECADE, YEAR, SEASON, MONTH, WEEK, DATE, TIME, DAY_TIME, TIMELINE, HOLIDAY, PERIOD, DURATION, TIME_UNIT, TIME_ZONE, ERA Modifier: modify time tokens, e.g., next modifies year in next year 5 token types: PREFIX, SUFFIX, LINKAGE, COMMA, IN_ARTICLE Numeral: ordinals and numbers, e.g., 10 in next 10 years 1 token type: NUMERAL Token types to tokens is like POS tags to words POS tags: next/JJ 10/CD years/NNS Token types: next/PREFIX 10/NUMERAL years/TIME_UNIT Only relevant to token types Independent of specific tokens
[]
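The token-type system on this slide behaves like POS tagging: every token is matched against regular expressions grouped under a token type. The sketch below illustrates that assignment; the patterns are heavily abbreviated assumptions (the paper initializes its collection from SUTime's token regular expressions), and a real tagger would also consult POS tags to resolve ambiguous words such as 'May'.

import re

# Abbreviated, illustrative patterns only -- not the full SynTime collection.
TOKEN_TYPE_PATTERNS = [
    ("YEAR",      re.compile(r"^(1[0-9]{3}|20[0-9]{2})$")),
    ("MONTH",     re.compile(r"^(january|february|march|april|may|june|july|"
                             r"august|september|october|november|december)$", re.I)),
    ("TIME_UNIT", re.compile(r"^(year|month|week|day|hour|minute|second)s?$", re.I)),
    ("PREFIX",    re.compile(r"^(next|last|this|several|early|late)$", re.I)),
    ("SUFFIX",    re.compile(r"^(ago|later)$", re.I)),
    ("NUMERAL",   re.compile(r"^\d+(st|nd|rd|th)?$", re.I)),
]

def tag_token_types(tokens):
    """Assign a token type to each token; unmatched tokens get 'O'."""
    return [(tok, next((name for name, pat in TOKEN_TYPE_PATTERNS
                        if pat.match(tok)), "O"))
            for tok in tokens]

print(tag_token_types(["next", "10", "years"]))
# -> [('next', 'PREFIX'), ('10', 'NUMERAL'), ('years', 'TIME_UNIT')]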
GEM-SciDuet-train-99#paper-1262#slide-11
1262
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime 1 for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249 ], "paper_content_text": [ "Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014) .", "Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "1 Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b) , Gigaword (Parker et al., 2011) , WikiWars (Mazur and Dale, 2010) , and Tweets.", "From the analysis we make four findings about time expressions.", "First, most time expressions are very short, with 80% of time expressions containing no more than three tokens.", "Second, at least 91.8% of time expressions contain at least one time token.", "Third, the vocabulary used to express time information is very small, with a small group of keywords.", "Finally, words in time expressions demonstrate similar syntactic behaviour.", "All the findings relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "According to the findings we propose a typebased approach named SynTime ('Syn' stands for syntactic) to recognize time expressions.", "Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions.", "Time tokens are the words that explicitly express time information, such as time units (e.g., 'year').", "Modifiers modify time tokens; they appear before or after time tokens, e.g., 'several' and 'ago' in 'several years ago.'", "Numerals are ordinals and numbers.", "From free text SynTime first identifies time tokens, then recognizes modifiers and numerals.", "Naturally, SynTime is a rule-based tagger.", "The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules.", "The definition of token type in SynTime is inspired by 
part-of-speech in which \"linguists group some words of language into classes (sets) which show similar syntactic behaviour.\"", "(Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour.", "Other rulebased taggers define types for tokens based on their semantic meaning.", "For example, SUTime defines 5 semantic modifier types, such as frequency modifiers; 2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens.", "(See Section 4.1 for details.)", "Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves.", "SynTime instead designs general rules on the token types rather than on the tokens themselves.", "For example, our general rules do not work on tokens 'February' nor '1989' but on their token types 'MONTH' and 'YEAR.'", "That is why we call SynTime a type-based approach.", "More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position.", "In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion.", "The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time.", "The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "(The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.)", "Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.", "3 datasets.", "More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset.", "To sum up, we make the following contributions.", "• We analyze time expressions from four datasets and make four findings.", "The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "• We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules.", "Syn-Time is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages.", "• We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines.", "Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "The task is divided into two subtasks: recognition and normalization.", "Rule-based Time Expression Recognition.", "Rule-based time taggers like GUTime, Heidel-Time, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Strötgen and Gertz, 2010; Chang and Manning, 2012) .", "Heidel-Time (Strötgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression.", "SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang 
and Manning, 2014) .", "It first identifies individual words, then expands them to chunks, and finally to time expressions.", "Rule-based taggers achieve very good results in TempEval exercises.", "SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens.", "Moreover, SynTime designs rules in a heuristic way.", "Machine Learning based Method.", "Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions.", "Example features include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013) .", "The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Some models obtain good performance, and even achieve the highest F 1 of 82.71% on strict match in TempEval-3 (Bethard, 2013) .", "Outside TempEval exercises, Angeli et al.", "leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012) .", "In the method named UWTime, Lee et al.", "handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014) .", "The two methods explicitly use linguistic information.", "In (Lee et al., 2014) , especially, CCG could capture rich structure information of language, similar to the rule-based methods.", "Tabassum et al.", "focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016) .", "They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.", "However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time.", "Time Expression Normalization.", "Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Strötgen and Gertz, 2010; Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Because the rule systems have high similarity, Llorens et al.", "suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012) .", "Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016) .", "Lee et al.", "(Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al.", "(Tabassum et al., 2016 ) use a loglinear algorithm to normalize time expressions.", "SynTime focuses only on the recognition task.", "The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012) .", "We conduct an analysis on four datasets: Time-Bank, Gigaword, WikiWars, and Tweets.", "Time-Bank (Pustejovsky et al., 2003b ) is a benchmark dataset in TempEval series (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , consisting of 183 news articles.", "Gigaword (Parker et al., 2011 ) is a large 
automatically labelled dataset with 2,452 news articles and used in TempEval-3.", "WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010) .", "Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression.", "Table 1 summarizes the datasets.", "Finding From the four datasets, we analyze their time expressions and make four findings.", "We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics.", "Finding 1 Time expressions are very short.", "More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words.", "Figure 1 plots the length distribution of time expressions.", "Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length Finding 2 More than 91% of time expressions contain at least one time token.", "The second column in Table 2 reports the percentage of time expressions that contain at least one time token.", "We find that at least 91.81% of time expressions contain time token(s).", "(Some time expressions have no time token but depend on other time expressions; in '2 to 8 days,' for example, '2' depends on '8 days.')", "This suggests that time tokens account for time expressions.", "Therefore, to recognize time expressions, it is essential to recognize their time tokens.", "Finding 3 Only a small group of time-related keywords are used to express time information.", "From the time expressions in all four datasets, we find that the group of keywords used to express time information is small.", "Table 3 reports the number of distinct words and of distinct time tokens.", "The words/tokens are manually normalized before counting and their variants are ignored.", "For example, 'year' and '5yrs' are counted as one token 'year.'", "Numerals in the counting are ignored.", "Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable.", "Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282.", "Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets.", "This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets.", "In other words, time expressions highly overlap at their time tokens.", "Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents.", "For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text.", "Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD.", "This indicates that POS could not provide enough information to distinguish time expressions from common words.", "However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT.", "Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD.", "This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way.", "When seeing this, we realize that this is exactly how linguists define part-of-speech for 
language.", "4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language.", "The four findings all relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small.", "To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral.", "SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types.", "Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level.", "Token types at the type level group the tokens of time expressions.", "Heuristic rules lie at the rule level, working on token types rather than on tokens themselves.", "That is why the heuristic rules are general.", "For example, the heuristic rules do not work on tokens '1989' nor 'February,' but on their token types 'YEAR' and 'MONTH.'", "The heuristic rules are only relevant to token types, and are independent of specific tokens.", "For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English.", "The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types.", "Figure 3 shows the overview of SynTime in practice.", "Shown on the left-hand side, SynTime is initialized with regular expressions over tokens.", "After initialization, SynTime can be directly applied on text.", "On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type.", "The expansion enables SynTime to recognize time expressions in text from different domains and different text types.", "Shown on the right-hand side of Figure 3 , Syn-Time recognizes time expression through three main steps.", "In the first step, SynTime identifies time tokens from the POS-tagged raw text.", "Then around the time tokens SynTime searches for modifiers and numerals to form time segments.", "In the last step, SynTime transforms the time segments to time expressions.", "SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral.", "Token types to tokens is like POS tags to words; for example, 'February' has a POS tag of NNP and a token type of MONTH.", "Time Token.", "We define 15 token types for the time tokens and use their names similar to Joda-Time classes: 5 DECADE (-), YEAR (-), SEA-SON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), 
TIME ZONE (6), and ERA (2).", "Number in '()' indicates the number of distinct tokens in this token type.", "'-' indicates that this token type involves changing digits and cannot be counted.", "Modifier.", "We define 3 token types for the modifiers according to their possible positions relative to time tokens.", "Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2).", "LINKAGE (4) link two time tokens.", "Besides, we define 2 special modifier types, COMMA (1) for comma ',' and IN ARTICLE (2) for indefinite articles 'a' and 'an.'", "TimeML (Pustejovsky et al., 2003a) and Time-Bank (Pustejovsky et al., 2003b) do not treat most prepositions like 'on' as a part of time expressions.", "Thus SynTime does not collect those prepositions.", "Numeral.", "Number in time expressions can be a time token e.g., '10' in 'October 10, 2016,' or a modifier e.g., '10' in '10 days.'", "We define NU-MERAL (-) for the ordinals and numbers.", "SynTime Initialization.", "The token regular expressions for initializing SynTime are collected from SUTime, 6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval-3 (Chang and Manning, , 2013 .", "Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions.", "Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions.", "The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction.", "Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions.", "Some words might cause ambiguity.", "For example, 'May' could be a modal verb, or the fifth month of year.", "To filter out the ambiguous words, we use POS information.", "In implementation, we use Stanford POS Tagger; 7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2.", "Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions.", "In the next two steps, SynTime works on those token types.", "Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment.", "The searching is under simple heuristic rules in which the key idea is to expand the time token's boundaries.", "At first, each time token is a time segment.", "If it is either a PERIOD or DURATION, then no need to further search.", "Otherwise, search its left and its right for modifiers and numerals.", "For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching.", "For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching.", "Both the left and the right searching stop when reaching a COMMA or LINK-AGE or a non-modifier/numeral word.", "The left searching does not exceed the previous time token; the right searching does not exceed the next time token.", "A time segment consists of exactly one time token, and zero or some modifiers/numerals.", "A special kind of time segments do not contain any time token; they depend on other time segments next to them.", "For example, 
in '8 to 20 days,' 'to 20 days' is a time segment, and '8 to' forms a dependent time segment.", "(See Figure 4(e) .)", "Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment.", "We scan the time segments in a sentence from beginning to the end.", "A stand-alone time segment is a time expression.", "(See Figure 4(a) .)", "The focus is to deal with two or more time segments that are adjacent or overlapping.", "If two time segments s 1 and s 2 are adjacent, merge them to form a new time segment s 1 .", "(See Figure 4(b) .)", "Consider that s 1 and s 2 overlap at a shared boundary.", "According to our time segment identification, the shared boundary could be a modifier or a numeral.", "If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s 1 and s 2 .", "(See Figure 4(c) .)", "If the word is a LINKAGE, then extract s 1 as a time expression and continue scanning.", "When the shared boundary is a COMMA, merge s 1 and s 2 only if the COMMA's previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same.", "(See Figure 4(d) .)", "Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types.", "After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types.", "SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule.", "The expansion requires the words to be added to be annotated manually.", "We apply the initial SynTime on the time expressions from training text and list the words that are not covered.", "Whether the uncovered words are added to SynTime is manually determined.", "The rule for determination is that the added words can not cause ambiguity and should be generic.", "Wiki-Wars dataset contains a few examples like this: 'The time Arnold reached Quebec City.'", "Words in this example are extremely descriptive, and we do not collect them.", "In tweets, on the other hand, people may use abbreviations and informal variants; for example, '2day' and 'tday' are popular spellings of 'today.'", "Such kind of abbreviations and informal variants will be collected.", "According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much.", "In addition, we find that even in tweets people tend to use formal words.", "In the Twitter word clusters trained from 56 million English tweets, 8 the most often used words are the formal words, and their frequencies are much greater than the informal words'.", "The cluster of 'today,' 9 for example, its most often use is the formal one, 'today,' which appears 1,220,829 times; while its second most often use '2day' appears only 34,827 times.", "The low rate of informal words (e.g., about 3% in 'today' cluster) suggests that even in informal environment the manual keyword addition costs little.", "Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UW-Time) on three datasets (i.e., TimeBank, Wiki-Wars, and Tweets).", "WikiWars is a specific domain 
dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text.", "For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I.", "Experiment Setting Datasets.", "We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset collected from Twitter.", "For the Tweets dataset, we randomly sample 4,000 tweets and use SUTime to tag them.", "Of these, 942 tweets each contain at least one time expression.", "From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions.", "We therefore roughly consider that SUTime misses about 3% of time expressions in tweets.", "Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank.", "We finally get 1,127 manually labeled time expressions.", "For the 942 tweets, we randomly sample 200 tweets as the test set and use the remaining 742 as the training set, because the baseline UWTime requires training.", "Baseline Methods.", "We compare SynTime with three baseline methods: HeidelTime (Strötgen and Gertz, 2010), SUTime (Chang and Manning, 2012), and UWTime (Lee et al., 2014).", "Evaluation Metrics.", "We follow TempEval-3 and use their evaluation toolkit 10 to report Precision, Recall, and F1 in terms of strict match and relaxed match (UzZaman et al., 2013).", "22, 1986' and 'February 01, 1989' at the level of word or of character.", "One suggestion is to consider a type-based learning method that could use type information.", "For example, the above two time expressions refer to the same pattern of 'MONTH NUMERAL COMMA YEAR.'", "Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. 
On TimeBank and Tweets datasets, only a few tokens are added, the corresponding results are affected slightly.", "This confirms that the size of time words is small, and that SynTime-I covers most of time words.", "On WikiWars dataset, relatively more tokens are added, SynTime-E performs much better than SynTime-I, especially in recall.", "It improves the recall by 3.25% in strict match and by 2.98% in relaxed match.", "This indicates that with more words added from specific domains (e.g., WikiWars dataset about war), SynTime can significantly improve the performance.", "Experiment Result Limitations SynTime assumes that words are tokenized and POS tagged correctly.", "In reality, however, the tokenized and tagged words are not that perfect, due to the limitation of used tools.", "For example, Stanford POS Tagger assigns VBD to the word 'sat' in 'friday or sat' while whose tag should be NNP.", "The incorrect tokens and POS tags affect the result.", "Conclusion and future work We conduct an analysis on time expressions from four datasets, and find that time expressions in general are very short and expressed by a small vocabulary, and words in time expressions demonstrate similar syntactic behavior.", "Our findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "Inspired by part-of-speech, based on the findings, we define a syntactic type system for the time expression, and propose a type-based time expression tagger, named by SynTime.", "SynTime defines syntactic token types for tokens and on the token types it designs general heuristic rules based on the idea of boundary expansion.", "Experiments on three datasets show that SynTime outperforms the stateof-the-art baselines, including rule-based time taggers and machine learning based time tagger.", "Because our heuristic rules are quite simple, Syn-Time is light-weight and runs in real time.", "Our token types and heuristic rules are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.", "Time expression is part of language and follows the principle of least effort.", "Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995) , we might expect that humans would share some common habits, and therefore expect that other parts of language would more or less follow the same principle.", "In the future we will try our analytical method on other parts of language." ] }
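To make the time segment identification step described in the paper content above concrete, the following is a minimal sketch of the boundary-expansion heuristic. It is an illustration, not the released SynTime code: the token type labels follow the paper, but the function name, the span representation, and the modifier sets are assumptions, and keeping a COMMA or LINKAGE as a segment's boundary token (so that two neighbouring segments can share it) is one plausible reading of the stopping rules. Dependent segments that contain no time token (such as '8 to' in '8 to 20 days') are not covered by this sketch.

# Illustrative sketch (not the SynTime source) of boundary expansion: each time
# token grows into a segment by absorbing modifiers and numerals around it.
TIME_TOKEN_TYPES = {
    "DECADE", "YEAR", "SEASON", "MONTH", "WEEK", "DATE", "TIME", "DAY_TIME",
    "TIMELINE", "HOLIDAY", "PERIOD", "DURATION", "TIME_UNIT", "TIME_ZONE", "ERA",
}
LEFT_MODIFIERS = {"PREFIX", "NUMERAL", "IN_ARTICLE"}   # allowed while expanding left
RIGHT_MODIFIERS = {"SUFFIX", "NUMERAL"}                # allowed while expanding right
BOUNDARY_TYPES = {"COMMA", "LINKAGE"}                  # kept as a shared boundary, then stop

def identify_time_segments(token_types):
    """token_types: one label per token (None for ordinary words);
    returns one (start, end) span per time token."""
    time_positions = [i for i, t in enumerate(token_types) if t in TIME_TOKEN_TYPES]
    segments = []
    for k, pos in enumerate(time_positions):
        left = right = pos
        if token_types[pos] not in {"PERIOD", "DURATION"}:       # these stand alone
            prev_time = time_positions[k - 1] if k > 0 else -1
            next_time = time_positions[k + 1] if k + 1 < len(time_positions) else len(token_types)
            while left - 1 > prev_time:                          # never cross another time token
                t = token_types[left - 1]
                if t in LEFT_MODIFIERS:
                    left -= 1
                elif t in BOUNDARY_TYPES:
                    left -= 1
                    break
                else:
                    break
            while right + 1 < next_time:
                t = token_types[right + 1]
                if t in RIGHT_MODIFIERS:
                    right += 1
                elif t in BOUNDARY_TYPES:
                    right += 1
                    break
                else:
                    break
        segments.append((left, right))
    return segments

For 'October 10, 2016' (token types MONTH NUMERAL COMMA YEAR), this sketch produces two overlapping segments, 'October 10,' and ', 2016', which the extraction sketch given after the next record field then merges into a single expression.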
{ "paper_header_number": [ "1", "2", "3.2", "4", "4.1", "4.2", "4.2.1", "4.2.2", "4.2.3", "4.3", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Finding", "SynTime: Syntactic Token Types and General Heuristic Rules", "SynTime Construction", "Time Expression Recognition", "Time Token Identification", "Time Segment Identification", "Time Expression Extraction", "SynTime Expansion", "Experiments", "Limitations", "Conclusion and future work" ] }
GEM-SciDuet-train-99#paper-1262#slide-11
SynTime Layout
Type level: Time Token, Modifier, Numeral. Token level: time-related tokens and token regular expressions. Type level: token types group the tokens and token regular expressions. Rule level: heuristic rules work on token types and are independent of specific tokens.
Type level: Time Token, Modifier, Numeral. Token level: time-related tokens and token regular expressions. Type level: token types group the tokens and token regular expressions. Rule level: heuristic rules work on token types and are independent of specific tokens.
[]
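The three-level layout summarized in this slide (tokens grouped under token types, heuristic rules written only against the types) can be pictured with a small sketch. The regular expressions below are a deliberately tiny, illustrative subset, not the full token resource that SynTime imports from SUTime, and token_type is a hypothetical lookup function, not SynTime's actual API.

import re

# Token-level regular expressions grouped under syntactic token types.
TOKEN_TYPE_REGEX = {
    "YEAR":      re.compile(r"^([12]\d{3}|'\d{2})$"),
    "MONTH":     re.compile(r"^(january|february|march|april|may|june|july|"
                            r"august|september|october|november|december)$", re.I),
    "WEEK":      re.compile(r"^(monday|tuesday|wednesday|thursday|friday|"
                            r"saturday|sunday)$", re.I),
    "TIME_UNIT": re.compile(r"^(year|month|week|day|hour|minute|second)s?$", re.I),
    "PREFIX":    re.compile(r"^(last|next|early|late|several|about)$", re.I),
    "SUFFIX":    re.compile(r"^(ago|later)$", re.I),
    "LINKAGE":   re.compile(r"^(to|and|or|-)$", re.I),
    "NUMERAL":   re.compile(r"^\d+(st|nd|rd|th)?$", re.I),
}

def token_type(word):
    """Map a raw token to its token type; None means an ordinary word."""
    for type_name, pattern in TOKEN_TYPE_REGEX.items():
        if pattern.match(word):
            return type_name
    return None

Because the rules only ever mention type names such as MONTH or PREFIX, never tokens such as 'February' or '1989', a new surface form (for example '2day') can be added under an existing type without changing any rule, which is what makes the expansion step cheap.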
GEM-SciDuet-train-99#paper-1262#slide-12
1262
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime 1 for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249 ], "paper_content_text": [ "Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014) .", "Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "1 Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b) , Gigaword (Parker et al., 2011) , WikiWars (Mazur and Dale, 2010) , and Tweets.", "From the analysis we make four findings about time expressions.", "First, most time expressions are very short, with 80% of time expressions containing no more than three tokens.", "Second, at least 91.8% of time expressions contain at least one time token.", "Third, the vocabulary used to express time information is very small, with a small group of keywords.", "Finally, words in time expressions demonstrate similar syntactic behaviour.", "All the findings relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "According to the findings we propose a typebased approach named SynTime ('Syn' stands for syntactic) to recognize time expressions.", "Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions.", "Time tokens are the words that explicitly express time information, such as time units (e.g., 'year').", "Modifiers modify time tokens; they appear before or after time tokens, e.g., 'several' and 'ago' in 'several years ago.'", "Numerals are ordinals and numbers.", "From free text SynTime first identifies time tokens, then recognizes modifiers and numerals.", "Naturally, SynTime is a rule-based tagger.", "The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules.", "The definition of token type in SynTime is inspired by 
part-of-speech in which \"linguists group some words of language into classes (sets) which show similar syntactic behaviour.\"", "(Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour.", "Other rulebased taggers define types for tokens based on their semantic meaning.", "For example, SUTime defines 5 semantic modifier types, such as frequency modifiers; 2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens.", "(See Section 4.1 for details.)", "Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves.", "SynTime instead designs general rules on the token types rather than on the tokens themselves.", "For example, our general rules do not work on tokens 'February' nor '1989' but on their token types 'MONTH' and 'YEAR.'", "That is why we call SynTime a type-based approach.", "More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position.", "In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion.", "The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time.", "The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "(The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.)", "Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.", "3 datasets.", "More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset.", "To sum up, we make the following contributions.", "• We analyze time expressions from four datasets and make four findings.", "The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "• We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules.", "Syn-Time is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages.", "• We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines.", "Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "The task is divided into two subtasks: recognition and normalization.", "Rule-based Time Expression Recognition.", "Rule-based time taggers like GUTime, Heidel-Time, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Strötgen and Gertz, 2010; Chang and Manning, 2012) .", "Heidel-Time (Strötgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression.", "SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang 
and Manning, 2014) .", "It first identifies individual words, then expands them to chunks, and finally to time expressions.", "Rule-based taggers achieve very good results in TempEval exercises.", "SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens.", "Moreover, SynTime designs rules in a heuristic way.", "Machine Learning based Method.", "Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions.", "Example features include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013) .", "The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Some models obtain good performance, and even achieve the highest F 1 of 82.71% on strict match in TempEval-3 (Bethard, 2013) .", "Outside TempEval exercises, Angeli et al.", "leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012) .", "In the method named UWTime, Lee et al.", "handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014) .", "The two methods explicitly use linguistic information.", "In (Lee et al., 2014) , especially, CCG could capture rich structure information of language, similar to the rule-based methods.", "Tabassum et al.", "focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016) .", "They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.", "However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time.", "Time Expression Normalization.", "Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Strötgen and Gertz, 2010; Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Because the rule systems have high similarity, Llorens et al.", "suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012) .", "Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016) .", "Lee et al.", "(Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al.", "(Tabassum et al., 2016 ) use a loglinear algorithm to normalize time expressions.", "SynTime focuses only on the recognition task.", "The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012) .", "We conduct an analysis on four datasets: Time-Bank, Gigaword, WikiWars, and Tweets.", "Time-Bank (Pustejovsky et al., 2003b ) is a benchmark dataset in TempEval series (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , consisting of 183 news articles.", "Gigaword (Parker et al., 2011 ) is a large 
automatically labelled dataset with 2,452 news articles and used in TempEval-3.", "WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010) .", "Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression.", "Table 1 summarizes the datasets.", "Finding From the four datasets, we analyze their time expressions and make four findings.", "We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics.", "Finding 1 Time expressions are very short.", "More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words.", "Figure 1 plots the length distribution of time expressions.", "Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length Finding 2 More than 91% of time expressions contain at least one time token.", "The second column in Table 2 reports the percentage of time expressions that contain at least one time token.", "We find that at least 91.81% of time expressions contain time token(s).", "(Some time expressions have no time token but depend on other time expressions; in '2 to 8 days,' for example, '2' depends on '8 days.')", "This suggests that time tokens account for time expressions.", "Therefore, to recognize time expressions, it is essential to recognize their time tokens.", "Finding 3 Only a small group of time-related keywords are used to express time information.", "From the time expressions in all four datasets, we find that the group of keywords used to express time information is small.", "Table 3 reports the number of distinct words and of distinct time tokens.", "The words/tokens are manually normalized before counting and their variants are ignored.", "For example, 'year' and '5yrs' are counted as one token 'year.'", "Numerals in the counting are ignored.", "Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable.", "Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282.", "Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets.", "This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets.", "In other words, time expressions highly overlap at their time tokens.", "Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents.", "For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text.", "Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD.", "This indicates that POS could not provide enough information to distinguish time expressions from common words.", "However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT.", "Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD.", "This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way.", "When seeing this, we realize that this is exactly how linguists define part-of-speech for 
language.", "4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language.", "The four findings all relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small.", "To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral.", "SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types.", "Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level.", "Token types at the type level group the tokens of time expressions.", "Heuristic rules lie at the rule level, working on token types rather than on tokens themselves.", "That is why the heuristic rules are general.", "For example, the heuristic rules do not work on tokens '1989' nor 'February,' but on their token types 'YEAR' and 'MONTH.'", "The heuristic rules are only relevant to token types, and are independent of specific tokens.", "For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English.", "The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types.", "Figure 3 shows the overview of SynTime in practice.", "Shown on the left-hand side, SynTime is initialized with regular expressions over tokens.", "After initialization, SynTime can be directly applied on text.", "On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type.", "The expansion enables SynTime to recognize time expressions in text from different domains and different text types.", "Shown on the right-hand side of Figure 3 , Syn-Time recognizes time expression through three main steps.", "In the first step, SynTime identifies time tokens from the POS-tagged raw text.", "Then around the time tokens SynTime searches for modifiers and numerals to form time segments.", "In the last step, SynTime transforms the time segments to time expressions.", "SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral.", "Token types to tokens is like POS tags to words; for example, 'February' has a POS tag of NNP and a token type of MONTH.", "Time Token.", "We define 15 token types for the time tokens and use their names similar to Joda-Time classes: 5 DECADE (-), YEAR (-), SEA-SON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), 
TIME ZONE (6), and ERA (2).", "Number in '()' indicates the number of distinct tokens in this token type.", "'-' indicates that this token type involves changing digits and cannot be counted.", "Modifier.", "We define 3 token types for the modifiers according to their possible positions relative to time tokens.", "Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2).", "LINKAGE (4) link two time tokens.", "Besides, we define 2 special modifier types, COMMA (1) for comma ',' and IN ARTICLE (2) for indefinite articles 'a' and 'an.'", "TimeML (Pustejovsky et al., 2003a) and Time-Bank (Pustejovsky et al., 2003b) do not treat most prepositions like 'on' as a part of time expressions.", "Thus SynTime does not collect those prepositions.", "Numeral.", "Number in time expressions can be a time token e.g., '10' in 'October 10, 2016,' or a modifier e.g., '10' in '10 days.'", "We define NU-MERAL (-) for the ordinals and numbers.", "SynTime Initialization.", "The token regular expressions for initializing SynTime are collected from SUTime, 6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval-3 (Chang and Manning, , 2013 .", "Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions.", "Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions.", "The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction.", "Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions.", "Some words might cause ambiguity.", "For example, 'May' could be a modal verb, or the fifth month of year.", "To filter out the ambiguous words, we use POS information.", "In implementation, we use Stanford POS Tagger; 7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2.", "Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions.", "In the next two steps, SynTime works on those token types.", "Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment.", "The searching is under simple heuristic rules in which the key idea is to expand the time token's boundaries.", "At first, each time token is a time segment.", "If it is either a PERIOD or DURATION, then no need to further search.", "Otherwise, search its left and its right for modifiers and numerals.", "For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching.", "For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching.", "Both the left and the right searching stop when reaching a COMMA or LINK-AGE or a non-modifier/numeral word.", "The left searching does not exceed the previous time token; the right searching does not exceed the next time token.", "A time segment consists of exactly one time token, and zero or some modifiers/numerals.", "A special kind of time segments do not contain any time token; they depend on other time segments next to them.", "For example, 
in '8 to 20 days,' 'to 20 days' is a time segment, and '8 to' forms a dependent time segment.", "(See Figure 4(e) .)", "Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment.", "We scan the time segments in a sentence from beginning to the end.", "A stand-alone time segment is a time expression.", "(See Figure 4(a) .)", "The focus is to deal with two or more time segments that are adjacent or overlapping.", "If two time segments s 1 and s 2 are adjacent, merge them to form a new time segment s 1 .", "(See Figure 4(b) .)", "Consider that s 1 and s 2 overlap at a shared boundary.", "According to our time segment identification, the shared boundary could be a modifier or a numeral.", "If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s 1 and s 2 .", "(See Figure 4(c) .)", "If the word is a LINKAGE, then extract s 1 as a time expression and continue scanning.", "When the shared boundary is a COMMA, merge s 1 and s 2 only if the COMMA's previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same.", "(See Figure 4(d) .)", "Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types.", "After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types.", "SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule.", "The expansion requires the words to be added to be annotated manually.", "We apply the initial SynTime on the time expressions from training text and list the words that are not covered.", "Whether the uncovered words are added to SynTime is manually determined.", "The rule for determination is that the added words can not cause ambiguity and should be generic.", "Wiki-Wars dataset contains a few examples like this: 'The time Arnold reached Quebec City.'", "Words in this example are extremely descriptive, and we do not collect them.", "In tweets, on the other hand, people may use abbreviations and informal variants; for example, '2day' and 'tday' are popular spellings of 'today.'", "Such kind of abbreviations and informal variants will be collected.", "According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much.", "In addition, we find that even in tweets people tend to use formal words.", "In the Twitter word clusters trained from 56 million English tweets, 8 the most often used words are the formal words, and their frequencies are much greater than the informal words'.", "The cluster of 'today,' 9 for example, its most often use is the formal one, 'today,' which appears 1,220,829 times; while its second most often use '2day' appears only 34,827 times.", "The low rate of informal words (e.g., about 3% in 'today' cluster) suggests that even in informal environment the manual keyword addition costs little.", "Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UW-Time) on three datasets (i.e., TimeBank, Wiki-Wars, and Tweets).", "WikiWars is a specific domain 
dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text.", "For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I.", "Experiment Setting Datasets.", "We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset collected from Twitter.", "For the Tweets dataset, we randomly sample 4,000 tweets and use SUTime to tag them.", "Of these, 942 tweets each contain at least one time expression.", "From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions.", "We therefore roughly consider that SUTime misses about 3% of time expressions in tweets.", "Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank.", "We finally get 1,127 manually labeled time expressions.", "For the 942 tweets, we randomly sample 200 tweets as the test set and use the remaining 742 as the training set, because the baseline UWTime requires training.", "Baseline Methods.", "We compare SynTime with three baseline methods: HeidelTime (Strötgen and Gertz, 2010), SUTime (Chang and Manning, 2012), and UWTime (Lee et al., 2014).", "Evaluation Metrics.", "We follow TempEval-3 and use their evaluation toolkit 10 to report Precision, Recall, and F1 in terms of strict match and relaxed match (UzZaman et al., 2013).", "22, 1986' and 'February 01, 1989' at the level of word or of character.", "One suggestion is to consider a type-based learning method that could use type information.", "For example, the above two time expressions refer to the same pattern of 'MONTH NUMERAL COMMA YEAR.'", "Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. 
On TimeBank and Tweets datasets, only a few tokens are added, the corresponding results are affected slightly.", "This confirms that the size of time words is small, and that SynTime-I covers most of time words.", "On WikiWars dataset, relatively more tokens are added, SynTime-E performs much better than SynTime-I, especially in recall.", "It improves the recall by 3.25% in strict match and by 2.98% in relaxed match.", "This indicates that with more words added from specific domains (e.g., WikiWars dataset about war), SynTime can significantly improve the performance.", "Experiment Result Limitations SynTime assumes that words are tokenized and POS tagged correctly.", "In reality, however, the tokenized and tagged words are not that perfect, due to the limitation of used tools.", "For example, Stanford POS Tagger assigns VBD to the word 'sat' in 'friday or sat' while whose tag should be NNP.", "The incorrect tokens and POS tags affect the result.", "Conclusion and future work We conduct an analysis on time expressions from four datasets, and find that time expressions in general are very short and expressed by a small vocabulary, and words in time expressions demonstrate similar syntactic behavior.", "Our findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "Inspired by part-of-speech, based on the findings, we define a syntactic type system for the time expression, and propose a type-based time expression tagger, named by SynTime.", "SynTime defines syntactic token types for tokens and on the token types it designs general heuristic rules based on the idea of boundary expansion.", "Experiments on three datasets show that SynTime outperforms the stateof-the-art baselines, including rule-based time taggers and machine learning based time tagger.", "Because our heuristic rules are quite simple, Syn-Time is light-weight and runs in real time.", "Our token types and heuristic rules are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.", "Time expression is part of language and follows the principle of least effort.", "Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995) , we might expect that humans would share some common habits, and therefore expect that other parts of language would more or less follow the same principle.", "In the future we will try our analytical method on other parts of language." ] }
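Since the experiments above report Precision, Recall, and F1 under both strict and relaxed match, a small sketch of those two matching criteria may help. This is only an approximation of how a TempEval-3 style scorer behaves (strict requires identical spans, relaxed requires any overlap), not the official evaluation toolkit, and the span representation is assumed.

def prf(gold_spans, pred_spans, relaxed=False):
    """Spans are (start, end) offsets; strict match needs identical spans,
    relaxed match only needs some overlap with a gold span."""
    def match(p, g):
        return p == g if not relaxed else (p[0] <= g[1] and g[0] <= p[1])
    tp_pred = sum(1 for p in pred_spans if any(match(p, g) for g in gold_spans))
    tp_gold = sum(1 for g in gold_spans if any(match(p, g) for p in pred_spans))
    precision = tp_pred / len(pred_spans) if pred_spans else 0.0
    recall = tp_gold / len(gold_spans) if gold_spans else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1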
{ "paper_header_number": [ "1", "2", "3.2", "4", "4.1", "4.2", "4.2.1", "4.2.2", "4.2.3", "4.3", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Finding", "SynTime: Syntactic Token Types and General Heuristic Rules", "SynTime Construction", "Time Expression Recognition", "Time Token Identification", "Time Segment Identification", "Time Expression Extraction", "SynTime Expansion", "Experiments", "Limitations", "Conclusion and future work" ] }
GEM-SciDuet-train-99#paper-1262#slide-12
SynTime Overview in practice
defined token types and do not change any rules Identify time tokens Import token regex to time numerals by expanding the
defined token types and do not change any rules Identify time tokens Import token regex to time numerals by expanding the
[]
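Tying the slide's three steps together, a minimal end-to-end driver might look like the sketch below. It reuses the illustrative helpers from the earlier sketches (the token_type lookup, identify_time_tokens, identify_time_segments, merge_segments); none of these names come from the SynTime release, and the expansion helper only illustrates the idea that expansion adds keywords under existing types without touching any rule.

def recognize_time_expressions(tokens, pos_tags, type_of):
    """End-to-end driver over the three steps; tokens and pos_tags come from
    an external tokenizer / POS tagger (the paper uses the Stanford tagger)."""
    types = identify_time_tokens(tokens, pos_tags, type_of)     # step 1: time tokens
    segments = identify_time_segments(types)                    # step 2: time segments
    spans = merge_segments(segments, types)                     # step 3: expressions
    return [" ".join(tokens[start:end + 1]) for start, end in spans]

# Expansion touches only data, never rules: new surface forms from training text
# (e.g. '2day') are registered under an existing token type; a real lookup would
# consult this table in addition to the regular expressions.
EXTRA_KEYWORDS = {}

def expand(new_words_by_type):
    for type_name, words in new_words_by_type.items():
        EXTRA_KEYWORDS.setdefault(type_name, set()).update(w.lower() for w in words)

With the toy regexes from the earlier sketch, recognize_time_expressions(['several', 'years', 'ago'], ['JJ', 'NNS', 'RB'], token_type) would return ['several years ago'].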
GEM-SciDuet-train-99#paper-1262#slide-13
1262
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime 1 for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249 ], "paper_content_text": [ "Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014) .", "Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "1 Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b) , Gigaword (Parker et al., 2011) , WikiWars (Mazur and Dale, 2010) , and Tweets.", "From the analysis we make four findings about time expressions.", "First, most time expressions are very short, with 80% of time expressions containing no more than three tokens.", "Second, at least 91.8% of time expressions contain at least one time token.", "Third, the vocabulary used to express time information is very small, with a small group of keywords.", "Finally, words in time expressions demonstrate similar syntactic behaviour.", "All the findings relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "According to the findings we propose a typebased approach named SynTime ('Syn' stands for syntactic) to recognize time expressions.", "Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions.", "Time tokens are the words that explicitly express time information, such as time units (e.g., 'year').", "Modifiers modify time tokens; they appear before or after time tokens, e.g., 'several' and 'ago' in 'several years ago.'", "Numerals are ordinals and numbers.", "From free text SynTime first identifies time tokens, then recognizes modifiers and numerals.", "Naturally, SynTime is a rule-based tagger.", "The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules.", "The definition of token type in SynTime is inspired by 
part-of-speech in which \"linguists group some words of language into classes (sets) which show similar syntactic behaviour.\"", "(Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour.", "Other rulebased taggers define types for tokens based on their semantic meaning.", "For example, SUTime defines 5 semantic modifier types, such as frequency modifiers; 2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens.", "(See Section 4.1 for details.)", "Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves.", "SynTime instead designs general rules on the token types rather than on the tokens themselves.", "For example, our general rules do not work on tokens 'February' nor '1989' but on their token types 'MONTH' and 'YEAR.'", "That is why we call SynTime a type-based approach.", "More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position.", "In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion.", "The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time.", "The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "(The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.)", "Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.", "3 datasets.", "More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset.", "To sum up, we make the following contributions.", "• We analyze time expressions from four datasets and make four findings.", "The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "• We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules.", "Syn-Time is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages.", "• We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines.", "Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "The task is divided into two subtasks: recognition and normalization.", "Rule-based Time Expression Recognition.", "Rule-based time taggers like GUTime, Heidel-Time, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Strötgen and Gertz, 2010; Chang and Manning, 2012) .", "Heidel-Time (Strötgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression.", "SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang 
and Manning, 2014) .", "It first identifies individual words, then expands them to chunks, and finally to time expressions.", "Rule-based taggers achieve very good results in TempEval exercises.", "SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens.", "Moreover, SynTime designs rules in a heuristic way.", "Machine Learning based Method.", "Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions.", "Example features include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013) .", "The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Some models obtain good performance, and even achieve the highest F 1 of 82.71% on strict match in TempEval-3 (Bethard, 2013) .", "Outside TempEval exercises, Angeli et al.", "leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012) .", "In the method named UWTime, Lee et al.", "handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014) .", "The two methods explicitly use linguistic information.", "In (Lee et al., 2014) , especially, CCG could capture rich structure information of language, similar to the rule-based methods.", "Tabassum et al.", "focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016) .", "They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.", "However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time.", "Time Expression Normalization.", "Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Strötgen and Gertz, 2010; Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Because the rule systems have high similarity, Llorens et al.", "suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012) .", "Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016) .", "Lee et al.", "(Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al.", "(Tabassum et al., 2016 ) use a loglinear algorithm to normalize time expressions.", "SynTime focuses only on the recognition task.", "The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012) .", "We conduct an analysis on four datasets: Time-Bank, Gigaword, WikiWars, and Tweets.", "Time-Bank (Pustejovsky et al., 2003b ) is a benchmark dataset in TempEval series (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , consisting of 183 news articles.", "Gigaword (Parker et al., 2011 ) is a large 
automatically labelled dataset with 2,452 news articles and used in TempEval-3.", "WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010) .", "Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression.", "Table 1 summarizes the datasets.", "Finding From the four datasets, we analyze their time expressions and make four findings.", "We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics.", "Finding 1 Time expressions are very short.", "More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words.", "Figure 1 plots the length distribution of time expressions.", "Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length Finding 2 More than 91% of time expressions contain at least one time token.", "The second column in Table 2 reports the percentage of time expressions that contain at least one time token.", "We find that at least 91.81% of time expressions contain time token(s).", "(Some time expressions have no time token but depend on other time expressions; in '2 to 8 days,' for example, '2' depends on '8 days.')", "This suggests that time tokens account for time expressions.", "Therefore, to recognize time expressions, it is essential to recognize their time tokens.", "Finding 3 Only a small group of time-related keywords are used to express time information.", "From the time expressions in all four datasets, we find that the group of keywords used to express time information is small.", "Table 3 reports the number of distinct words and of distinct time tokens.", "The words/tokens are manually normalized before counting and their variants are ignored.", "For example, 'year' and '5yrs' are counted as one token 'year.'", "Numerals in the counting are ignored.", "Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable.", "Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282.", "Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets.", "This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets.", "In other words, time expressions highly overlap at their time tokens.", "Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents.", "For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text.", "Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD.", "This indicates that POS could not provide enough information to distinguish time expressions from common words.", "However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT.", "Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD.", "This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way.", "When seeing this, we realize that this is exactly how linguists define part-of-speech for 
language.", "4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language.", "The four findings all relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small.", "To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral.", "SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types.", "Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level.", "Token types at the type level group the tokens of time expressions.", "Heuristic rules lie at the rule level, working on token types rather than on tokens themselves.", "That is why the heuristic rules are general.", "For example, the heuristic rules do not work on tokens '1989' nor 'February,' but on their token types 'YEAR' and 'MONTH.'", "The heuristic rules are only relevant to token types, and are independent of specific tokens.", "For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English.", "The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types.", "Figure 3 shows the overview of SynTime in practice.", "Shown on the left-hand side, SynTime is initialized with regular expressions over tokens.", "After initialization, SynTime can be directly applied on text.", "On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type.", "The expansion enables SynTime to recognize time expressions in text from different domains and different text types.", "Shown on the right-hand side of Figure 3 , Syn-Time recognizes time expression through three main steps.", "In the first step, SynTime identifies time tokens from the POS-tagged raw text.", "Then around the time tokens SynTime searches for modifiers and numerals to form time segments.", "In the last step, SynTime transforms the time segments to time expressions.", "SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral.", "Token types to tokens is like POS tags to words; for example, 'February' has a POS tag of NNP and a token type of MONTH.", "Time Token.", "We define 15 token types for the time tokens and use their names similar to Joda-Time classes: 5 DECADE (-), YEAR (-), SEA-SON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), 
TIME ZONE (6), and ERA (2).", "Number in '()' indicates the number of distinct tokens in this token type.", "'-' indicates that this token type involves changing digits and cannot be counted.", "Modifier.", "We define 3 token types for the modifiers according to their possible positions relative to time tokens.", "Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2).", "LINKAGE (4) link two time tokens.", "Besides, we define 2 special modifier types, COMMA (1) for comma ',' and IN ARTICLE (2) for indefinite articles 'a' and 'an.'", "TimeML (Pustejovsky et al., 2003a) and Time-Bank (Pustejovsky et al., 2003b) do not treat most prepositions like 'on' as a part of time expressions.", "Thus SynTime does not collect those prepositions.", "Numeral.", "Number in time expressions can be a time token e.g., '10' in 'October 10, 2016,' or a modifier e.g., '10' in '10 days.'", "We define NU-MERAL (-) for the ordinals and numbers.", "SynTime Initialization.", "The token regular expressions for initializing SynTime are collected from SUTime, 6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval-3 (Chang and Manning, , 2013 .", "Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions.", "Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions.", "The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction.", "Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions.", "Some words might cause ambiguity.", "For example, 'May' could be a modal verb, or the fifth month of year.", "To filter out the ambiguous words, we use POS information.", "In implementation, we use Stanford POS Tagger; 7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2.", "Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions.", "In the next two steps, SynTime works on those token types.", "Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment.", "The searching is under simple heuristic rules in which the key idea is to expand the time token's boundaries.", "At first, each time token is a time segment.", "If it is either a PERIOD or DURATION, then no need to further search.", "Otherwise, search its left and its right for modifiers and numerals.", "For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching.", "For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching.", "Both the left and the right searching stop when reaching a COMMA or LINK-AGE or a non-modifier/numeral word.", "The left searching does not exceed the previous time token; the right searching does not exceed the next time token.", "A time segment consists of exactly one time token, and zero or some modifiers/numerals.", "A special kind of time segments do not contain any time token; they depend on other time segments next to them.", "For example, 
in '8 to 20 days,' 'to 20 days' is a time segment, and '8 to' forms a dependent time segment.", "(See Figure 4(e) .)", "Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment.", "We scan the time segments in a sentence from beginning to the end.", "A stand-alone time segment is a time expression.", "(See Figure 4(a) .)", "The focus is to deal with two or more time segments that are adjacent or overlapping.", "If two time segments s 1 and s 2 are adjacent, merge them to form a new time segment s 1 .", "(See Figure 4(b) .)", "Consider that s 1 and s 2 overlap at a shared boundary.", "According to our time segment identification, the shared boundary could be a modifier or a numeral.", "If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s 1 and s 2 .", "(See Figure 4(c) .)", "If the word is a LINKAGE, then extract s 1 as a time expression and continue scanning.", "When the shared boundary is a COMMA, merge s 1 and s 2 only if the COMMA's previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same.", "(See Figure 4(d) .)", "Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types.", "After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types.", "SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule.", "The expansion requires the words to be added to be annotated manually.", "We apply the initial SynTime on the time expressions from training text and list the words that are not covered.", "Whether the uncovered words are added to SynTime is manually determined.", "The rule for determination is that the added words can not cause ambiguity and should be generic.", "Wiki-Wars dataset contains a few examples like this: 'The time Arnold reached Quebec City.'", "Words in this example are extremely descriptive, and we do not collect them.", "In tweets, on the other hand, people may use abbreviations and informal variants; for example, '2day' and 'tday' are popular spellings of 'today.'", "Such kind of abbreviations and informal variants will be collected.", "According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much.", "In addition, we find that even in tweets people tend to use formal words.", "In the Twitter word clusters trained from 56 million English tweets, 8 the most often used words are the formal words, and their frequencies are much greater than the informal words'.", "The cluster of 'today,' 9 for example, its most often use is the formal one, 'today,' which appears 1,220,829 times; while its second most often use '2day' appears only 34,827 times.", "The low rate of informal words (e.g., about 3% in 'today' cluster) suggests that even in informal environment the manual keyword addition costs little.", "Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UW-Time) on three datasets (i.e., TimeBank, Wiki-Wars, and Tweets).", "WikiWars is a specific domain 
dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text.", "For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I.", "Experiment Setting Datasets.", "We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset that are collected from Twitter.", "For Tweets dataset, we randomly sample 4000 tweets and use SUTime to tag them.", "942 tweets of which each contains at least one time expression.", "From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions.", "We therefore roughly consider that SU-Time misses about 3% time expressions in tweets.", "Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank.", "We finally get 1,127 manually labeled time expressions.", "For the 942 tweets, we randomly sample 200 tweets as test set, and the rest 742 as training set, because a baseline UWTime requires training.", "Baseline Methods.", "We compare SynTime with methods: HeidelTime (Strötgen and Gertz, 2010) , SUTime (Chang and , and UW- Evaluation Metrics.", "We follow TempEval-3 and use their evaluation toolkit 10 to report P recision, Recall, and F 1 in terms of strict match and relaxed match (UzZaman et al., 2013).", "22, 1986' and 'February 01, 1989 ' at the level of word or of character.", "One suggestion is to consider a type-based learning method that could use type information.", "For example, the above two time expressions refer to the same pattern of 'MONTH NUMERAL COMMA Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. 
On TimeBank and Tweets datasets, only a few tokens are added, the corresponding results are affected slightly.", "This confirms that the size of time words is small, and that SynTime-I covers most of time words.", "On WikiWars dataset, relatively more tokens are added, SynTime-E performs much better than SynTime-I, especially in recall.", "It improves the recall by 3.25% in strict match and by 2.98% in relaxed match.", "This indicates that with more words added from specific domains (e.g., WikiWars dataset about war), SynTime can significantly improve the performance.", "Experiment Result Limitations SynTime assumes that words are tokenized and POS tagged correctly.", "In reality, however, the tokenized and tagged words are not that perfect, due to the limitation of used tools.", "For example, Stanford POS Tagger assigns VBD to the word 'sat' in 'friday or sat' while whose tag should be NNP.", "The incorrect tokens and POS tags affect the result.", "Conclusion and future work We conduct an analysis on time expressions from four datasets, and find that time expressions in general are very short and expressed by a small vocabulary, and words in time expressions demonstrate similar syntactic behavior.", "Our findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "Inspired by part-of-speech, based on the findings, we define a syntactic type system for the time expression, and propose a type-based time expression tagger, named by SynTime.", "SynTime defines syntactic token types for tokens and on the token types it designs general heuristic rules based on the idea of boundary expansion.", "Experiments on three datasets show that SynTime outperforms the stateof-the-art baselines, including rule-based time taggers and machine learning based time tagger.", "Because our heuristic rules are quite simple, Syn-Time is light-weight and runs in real time.", "Our token types and heuristic rules are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.", "Time expression is part of language and follows the principle of least effort.", "Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995) , we might expect that humans would share some common habits, and therefore expect that other parts of language would more or less follow the same principle.", "In the future we will try our analytical method on other parts of language." ] }
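To make the three-step recognition procedure described in the paper content above concrete (time token identification, time segment identification by boundary expansion, and time expression extraction by merging segments), here is a minimal Python sketch. The token-type names are taken from the paper, but the tiny LEXICON, its regular expressions, and all function names are illustrative assumptions; the real SynTime initializes its token regular expressions from SUTime, filters ambiguous tokens with POS tags, and has additional COMMA/LINKAGE merging rules that are omitted here.

```python
# Minimal sketch of a SynTime-style pipeline: (1) assign token types,
# (2) expand time-token boundaries into time segments, (3) merge segments.
# The lexicon below is a hypothetical toy subset, NOT the SUTime-derived
# regex collection; POS filtering and COMMA/LINKAGE rules are omitted.
import re

LEXICON = {  # token type -> regular expression over a single token
    "YEAR":      r"(19|20)\d\d",
    "MONTH":     r"(january|february|march|april|may|june|july|august|september|october|november|december)",
    "TIME_UNIT": r"(year|quarter|month|week|day|hour|minute)s?",
    "NUMERAL":   r"(\d+|first|second|third|fourth|fifth)",
    "PREFIX":    r"(the|of|early|late|last|next)",
    "SUFFIX":    r"(ago|later)",
}
TIME_TOKEN_TYPES = {"YEAR", "MONTH", "TIME_UNIT"}   # toy subset of the 15 types
LEFT_EXPAND = {"PREFIX", "NUMERAL", "IN_ARTICLE"}   # left-search rule
RIGHT_EXPAND = {"SUFFIX", "NUMERAL"}                # right-search rule

def assign_types(tokens):
    """Step 1: map each token to a token type, or None for ordinary words."""
    return [next((t for t, p in LEXICON.items()
                  if re.fullmatch(p, tok, flags=re.IGNORECASE)), None)
            for tok in tokens]

def identify_segments(types):
    """Step 2: around each time token, expand boundaries over modifiers/numerals."""
    segments = []
    for i, ttype in enumerate(types):
        if ttype not in TIME_TOKEN_TYPES:
            continue
        left = i
        # expansion stops at ordinary words and at other time tokens
        while left - 1 >= 0 and types[left - 1] in LEFT_EXPAND:
            left -= 1
        right = i
        while right + 1 < len(types) and types[right + 1] in RIGHT_EXPAND:
            right += 1
        segments.append((left, right))
    return segments

def extract_expressions(tokens, segments):
    """Step 3: merge adjacent/overlapping segments into time expressions."""
    merged = []
    for start, end in sorted(segments):
        if merged and start <= merged[-1][1] + 1:   # adjacent or overlapping
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [" ".join(tokens[s:e + 1]) for s, e in merged]

if __name__ == "__main__":
    tokens = "profits rose in the third quarter of 1984".split()
    spans = identify_segments(assign_types(tokens))
    print(extract_expressions(tokens, spans))   # ['the third quarter of 1984']
```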
{ "paper_header_number": [ "1", "2", "3.2", "4", "4.1", "4.2", "4.2.1", "4.2.2", "4.2.3", "4.3", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Finding", "SynTime: Syntactic Token Types and General Heuristic Rules", "SynTime Construction", "Time Expression Recognition", "Time Token Identification", "Time Segment Identification", "Time Expression Extraction", "SynTime Expansion", "Experiments", "Limitations", "Conclusion and future work" ] }
GEM-SciDuet-train-99#paper-1262#slide-13
An example: the third quarter of 1984
A sequence of tokens: the third quarter of 1984 Assign tokens with token types PREFIX NUMERAL TIME_UNIT PREFIX YEAR Identify modifiers and numerals by searching time tokens' surroundings A sequence of token types PREFIX NUMERAL TIME_UNIT PREFIX YEAR Export a sequence of tokens as time expression the third quarter of 1984
A sequence of tokens: the third quarter of 1984 Assign tokens with token types PREFIX NUMERAL TIME_UNIT PREFIX YEAR Identify modifiers and numerals by searching time tokens' surroundings A sequence of token types PREFIX NUMERAL TIME_UNIT PREFIX YEAR Export a sequence of tokens as time expression the third quarter of 1984
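Traced by hand against the rules described in the paper content above, the slide's example passes through the following intermediate states (an illustration only, not the output of any released tool):

```python
# Hand-traced illustration of the slide's example under the SynTime heuristics.
tokens = ["the", "third", "quarter", "of", "1984"]
types  = ["PREFIX", "NUMERAL", "TIME_UNIT", "PREFIX", "YEAR"]   # step 1: token types

# Step 2: boundary expansion around the two time tokens ("quarter", "1984")
segments = [(0, 2),   # "the third quarter": PREFIX and NUMERAL expanded leftwards
            (3, 4)]   # "of 1984": PREFIX expanded to the left of the YEAR token

# Step 3: the two segments are adjacent, so they merge into one expression
print(" ".join(tokens[0:5]))   # the third quarter of 1984
```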
[]
GEM-SciDuet-train-99#paper-1262#slide-14
1262
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime 1 for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249 ], "paper_content_text": [ "Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014) .", "Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "1 Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b) , Gigaword (Parker et al., 2011) , WikiWars (Mazur and Dale, 2010) , and Tweets.", "From the analysis we make four findings about time expressions.", "First, most time expressions are very short, with 80% of time expressions containing no more than three tokens.", "Second, at least 91.8% of time expressions contain at least one time token.", "Third, the vocabulary used to express time information is very small, with a small group of keywords.", "Finally, words in time expressions demonstrate similar syntactic behaviour.", "All the findings relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "According to the findings we propose a typebased approach named SynTime ('Syn' stands for syntactic) to recognize time expressions.", "Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions.", "Time tokens are the words that explicitly express time information, such as time units (e.g., 'year').", "Modifiers modify time tokens; they appear before or after time tokens, e.g., 'several' and 'ago' in 'several years ago.'", "Numerals are ordinals and numbers.", "From free text SynTime first identifies time tokens, then recognizes modifiers and numerals.", "Naturally, SynTime is a rule-based tagger.", "The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules.", "The definition of token type in SynTime is inspired by 
part-of-speech in which \"linguists group some words of language into classes (sets) which show similar syntactic behaviour.\"", "(Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour.", "Other rulebased taggers define types for tokens based on their semantic meaning.", "For example, SUTime defines 5 semantic modifier types, such as frequency modifiers; 2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens.", "(See Section 4.1 for details.)", "Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves.", "SynTime instead designs general rules on the token types rather than on the tokens themselves.", "For example, our general rules do not work on tokens 'February' nor '1989' but on their token types 'MONTH' and 'YEAR.'", "That is why we call SynTime a type-based approach.", "More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position.", "In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion.", "The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time.", "The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "(The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.)", "Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.", "3 datasets.", "More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset.", "To sum up, we make the following contributions.", "• We analyze time expressions from four datasets and make four findings.", "The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "• We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules.", "Syn-Time is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages.", "• We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines.", "Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "The task is divided into two subtasks: recognition and normalization.", "Rule-based Time Expression Recognition.", "Rule-based time taggers like GUTime, Heidel-Time, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Strötgen and Gertz, 2010; Chang and Manning, 2012) .", "Heidel-Time (Strötgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression.", "SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang 
and Manning, 2014) .", "It first identifies individual words, then expands them to chunks, and finally to time expressions.", "Rule-based taggers achieve very good results in TempEval exercises.", "SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens.", "Moreover, SynTime designs rules in a heuristic way.", "Machine Learning based Method.", "Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions.", "Example features include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013) .", "The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Some models obtain good performance, and even achieve the highest F 1 of 82.71% on strict match in TempEval-3 (Bethard, 2013) .", "Outside TempEval exercises, Angeli et al.", "leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012) .", "In the method named UWTime, Lee et al.", "handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014) .", "The two methods explicitly use linguistic information.", "In (Lee et al., 2014) , especially, CCG could capture rich structure information of language, similar to the rule-based methods.", "Tabassum et al.", "focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016) .", "They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.", "However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time.", "Time Expression Normalization.", "Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Strötgen and Gertz, 2010; Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Because the rule systems have high similarity, Llorens et al.", "suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012) .", "Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016) .", "Lee et al.", "(Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al.", "(Tabassum et al., 2016 ) use a loglinear algorithm to normalize time expressions.", "SynTime focuses only on the recognition task.", "The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012) .", "We conduct an analysis on four datasets: Time-Bank, Gigaword, WikiWars, and Tweets.", "Time-Bank (Pustejovsky et al., 2003b ) is a benchmark dataset in TempEval series (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , consisting of 183 news articles.", "Gigaword (Parker et al., 2011 ) is a large 
automatically labelled dataset with 2,452 news articles and used in TempEval-3.", "WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010) .", "Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression.", "Table 1 summarizes the datasets.", "Finding From the four datasets, we analyze their time expressions and make four findings.", "We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics.", "Finding 1 Time expressions are very short.", "More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words.", "Figure 1 plots the length distribution of time expressions.", "Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length Finding 2 More than 91% of time expressions contain at least one time token.", "The second column in Table 2 reports the percentage of time expressions that contain at least one time token.", "We find that at least 91.81% of time expressions contain time token(s).", "(Some time expressions have no time token but depend on other time expressions; in '2 to 8 days,' for example, '2' depends on '8 days.')", "This suggests that time tokens account for time expressions.", "Therefore, to recognize time expressions, it is essential to recognize their time tokens.", "Finding 3 Only a small group of time-related keywords are used to express time information.", "From the time expressions in all four datasets, we find that the group of keywords used to express time information is small.", "Table 3 reports the number of distinct words and of distinct time tokens.", "The words/tokens are manually normalized before counting and their variants are ignored.", "For example, 'year' and '5yrs' are counted as one token 'year.'", "Numerals in the counting are ignored.", "Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable.", "Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282.", "Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets.", "This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets.", "In other words, time expressions highly overlap at their time tokens.", "Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents.", "For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text.", "Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD.", "This indicates that POS could not provide enough information to distinguish time expressions from common words.", "However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT.", "Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD.", "This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way.", "When seeing this, we realize that this is exactly how linguists define part-of-speech for 
language.", "4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language.", "The four findings all relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small.", "To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral.", "SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types.", "Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level.", "Token types at the type level group the tokens of time expressions.", "Heuristic rules lie at the rule level, working on token types rather than on tokens themselves.", "That is why the heuristic rules are general.", "For example, the heuristic rules do not work on tokens '1989' nor 'February,' but on their token types 'YEAR' and 'MONTH.'", "The heuristic rules are only relevant to token types, and are independent of specific tokens.", "For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English.", "The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types.", "Figure 3 shows the overview of SynTime in practice.", "Shown on the left-hand side, SynTime is initialized with regular expressions over tokens.", "After initialization, SynTime can be directly applied on text.", "On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type.", "The expansion enables SynTime to recognize time expressions in text from different domains and different text types.", "Shown on the right-hand side of Figure 3 , Syn-Time recognizes time expression through three main steps.", "In the first step, SynTime identifies time tokens from the POS-tagged raw text.", "Then around the time tokens SynTime searches for modifiers and numerals to form time segments.", "In the last step, SynTime transforms the time segments to time expressions.", "SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral.", "Token types to tokens is like POS tags to words; for example, 'February' has a POS tag of NNP and a token type of MONTH.", "Time Token.", "We define 15 token types for the time tokens and use their names similar to Joda-Time classes: 5 DECADE (-), YEAR (-), SEA-SON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), 
TIME ZONE (6), and ERA (2).", "Number in '()' indicates the number of distinct tokens in this token type.", "'-' indicates that this token type involves changing digits and cannot be counted.", "Modifier.", "We define 3 token types for the modifiers according to their possible positions relative to time tokens.", "Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2).", "LINKAGE (4) link two time tokens.", "Besides, we define 2 special modifier types, COMMA (1) for comma ',' and IN ARTICLE (2) for indefinite articles 'a' and 'an.'", "TimeML (Pustejovsky et al., 2003a) and Time-Bank (Pustejovsky et al., 2003b) do not treat most prepositions like 'on' as a part of time expressions.", "Thus SynTime does not collect those prepositions.", "Numeral.", "Number in time expressions can be a time token e.g., '10' in 'October 10, 2016,' or a modifier e.g., '10' in '10 days.'", "We define NU-MERAL (-) for the ordinals and numbers.", "SynTime Initialization.", "The token regular expressions for initializing SynTime are collected from SUTime, 6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval-3 (Chang and Manning, , 2013 .", "Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions.", "Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions.", "The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction.", "Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions.", "Some words might cause ambiguity.", "For example, 'May' could be a modal verb, or the fifth month of year.", "To filter out the ambiguous words, we use POS information.", "In implementation, we use Stanford POS Tagger; 7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2.", "Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions.", "In the next two steps, SynTime works on those token types.", "Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment.", "The searching is under simple heuristic rules in which the key idea is to expand the time token's boundaries.", "At first, each time token is a time segment.", "If it is either a PERIOD or DURATION, then no need to further search.", "Otherwise, search its left and its right for modifiers and numerals.", "For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching.", "For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching.", "Both the left and the right searching stop when reaching a COMMA or LINK-AGE or a non-modifier/numeral word.", "The left searching does not exceed the previous time token; the right searching does not exceed the next time token.", "A time segment consists of exactly one time token, and zero or some modifiers/numerals.", "A special kind of time segments do not contain any time token; they depend on other time segments next to them.", "For example, 
in '8 to 20 days,' 'to 20 days' is a time segment, and '8 to' forms a dependent time segment.", "(See Figure 4(e) .)", "Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment.", "We scan the time segments in a sentence from beginning to the end.", "A stand-alone time segment is a time expression.", "(See Figure 4(a) .)", "The focus is to deal with two or more time segments that are adjacent or overlapping.", "If two time segments s 1 and s 2 are adjacent, merge them to form a new time segment s 1 .", "(See Figure 4(b) .)", "Consider that s 1 and s 2 overlap at a shared boundary.", "According to our time segment identification, the shared boundary could be a modifier or a numeral.", "If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s 1 and s 2 .", "(See Figure 4(c) .)", "If the word is a LINKAGE, then extract s 1 as a time expression and continue scanning.", "When the shared boundary is a COMMA, merge s 1 and s 2 only if the COMMA's previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same.", "(See Figure 4(d) .)", "Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types.", "After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types.", "SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule.", "The expansion requires the words to be added to be annotated manually.", "We apply the initial SynTime on the time expressions from training text and list the words that are not covered.", "Whether the uncovered words are added to SynTime is manually determined.", "The rule for determination is that the added words can not cause ambiguity and should be generic.", "Wiki-Wars dataset contains a few examples like this: 'The time Arnold reached Quebec City.'", "Words in this example are extremely descriptive, and we do not collect them.", "In tweets, on the other hand, people may use abbreviations and informal variants; for example, '2day' and 'tday' are popular spellings of 'today.'", "Such kind of abbreviations and informal variants will be collected.", "According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much.", "In addition, we find that even in tweets people tend to use formal words.", "In the Twitter word clusters trained from 56 million English tweets, 8 the most often used words are the formal words, and their frequencies are much greater than the informal words'.", "The cluster of 'today,' 9 for example, its most often use is the formal one, 'today,' which appears 1,220,829 times; while its second most often use '2day' appears only 34,827 times.", "The low rate of informal words (e.g., about 3% in 'today' cluster) suggests that even in informal environment the manual keyword addition costs little.", "Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UW-Time) on three datasets (i.e., TimeBank, Wiki-Wars, and Tweets).", "WikiWars is a specific domain 
dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text.", "For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I.", "Experiment Setting Datasets.", "We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset that are collected from Twitter.", "For Tweets dataset, we randomly sample 4000 tweets and use SUTime to tag them.", "942 tweets of which each contains at least one time expression.", "From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions.", "We therefore roughly consider that SU-Time misses about 3% time expressions in tweets.", "Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank.", "We finally get 1,127 manually labeled time expressions.", "For the 942 tweets, we randomly sample 200 tweets as test set, and the rest 742 as training set, because a baseline UWTime requires training.", "Baseline Methods.", "We compare SynTime with methods: HeidelTime (Strötgen and Gertz, 2010) , SUTime (Chang and , and UW- Evaluation Metrics.", "We follow TempEval-3 and use their evaluation toolkit 10 to report P recision, Recall, and F 1 in terms of strict match and relaxed match (UzZaman et al., 2013).", "22, 1986' and 'February 01, 1989 ' at the level of word or of character.", "One suggestion is to consider a type-based learning method that could use type information.", "For example, the above two time expressions refer to the same pattern of 'MONTH NUMERAL COMMA Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. 
On TimeBank and Tweets datasets, only a few tokens are added, the corresponding results are affected slightly.", "This confirms that the size of time words is small, and that SynTime-I covers most of time words.", "On WikiWars dataset, relatively more tokens are added, SynTime-E performs much better than SynTime-I, especially in recall.", "It improves the recall by 3.25% in strict match and by 2.98% in relaxed match.", "This indicates that with more words added from specific domains (e.g., WikiWars dataset about war), SynTime can significantly improve the performance.", "Experiment Result Limitations SynTime assumes that words are tokenized and POS tagged correctly.", "In reality, however, the tokenized and tagged words are not that perfect, due to the limitation of used tools.", "For example, Stanford POS Tagger assigns VBD to the word 'sat' in 'friday or sat' while whose tag should be NNP.", "The incorrect tokens and POS tags affect the result.", "Conclusion and future work We conduct an analysis on time expressions from four datasets, and find that time expressions in general are very short and expressed by a small vocabulary, and words in time expressions demonstrate similar syntactic behavior.", "Our findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "Inspired by part-of-speech, based on the findings, we define a syntactic type system for the time expression, and propose a type-based time expression tagger, named by SynTime.", "SynTime defines syntactic token types for tokens and on the token types it designs general heuristic rules based on the idea of boundary expansion.", "Experiments on three datasets show that SynTime outperforms the stateof-the-art baselines, including rule-based time taggers and machine learning based time tagger.", "Because our heuristic rules are quite simple, Syn-Time is light-weight and runs in real time.", "Our token types and heuristic rules are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.", "Time expression is part of language and follows the principle of least effort.", "Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995) , we might expect that humans would share some common habits, and therefore expect that other parts of language would more or less follow the same principle.", "In the future we will try our analytical method on other parts of language." ] }
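The time token identification step restated above combines token regular expressions with a POS filter to drop ambiguous words such as the modal verb 'May'. The sketch below illustrates that idea; NLTK's tagger is used here only as a stand-in for the Stanford POS Tagger named in the paper, and the mini-lexicon and allowed-POS lists are assumptions, not the paper's actual resources.

```python
# Sketch of time token identification with POS-based disambiguation.
# Requires: pip install nltk, then nltk.download('averaged_perceptron_tagger').
import re
import nltk

# Hypothetical mini-lexicon: token type -> (token regex, allowed POS prefixes).
# Finding 4 suggests time tokens are usually tagged NN* (plus CD for digits).
TIME_TOKEN_TYPES = {
    "MONTH":     (r"(january|february|march|april|may|june|july|august|september|october|november|december)", ("NN",)),
    "YEAR":      (r"(19|20)\d\d", ("CD",)),
    "TIME_UNIT": (r"(year|quarter|month|week|day|hour|minute)s?", ("NN",)),
}

def identify_time_tokens(tokens):
    """Return (index, token, type) for tokens passing both regex and POS filter."""
    hits = []
    for i, (tok, pos) in enumerate(nltk.pos_tag(tokens)):
        for ttype, (pattern, allowed_pos) in TIME_TOKEN_TYPES.items():
            if re.fullmatch(pattern, tok, flags=re.IGNORECASE) and pos.startswith(allowed_pos):
                hits.append((i, tok, ttype))
                break
    return hits

# 'may' as a modal verb (tagged MD) should be filtered out, while 'May' the
# month (tagged NNP) and '2016' (tagged CD) should be kept.
print(identify_time_tokens("You may arrive in May 2016".split()))
```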
{ "paper_header_number": [ "1", "2", "3.2", "4", "4.1", "4.2", "4.2.1", "4.2.2", "4.2.3", "4.3", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Finding", "SynTime: Syntactic Token Types and General Heuristic Rules", "SynTime Construction", "Time Expression Recognition", "Time Token Identification", "Time Segment Identification", "Time Expression Extraction", "SynTime Expansion", "Experiments", "Limitations", "Conclusion and future work" ] }
GEM-SciDuet-train-99#paper-1262#slide-14
Time Expression Recognition Experiments
SynTime-E: Expanded version, adding keywords to SynTime-I (Add keywords under the defined token types and do not change any rules.) TimeBank: comprehensive data in formal text WikiWars: specific domain data in formal text Tweets: comprehensive data in informal text
SynTime-E: Expanded version, adding keywords to SynTime-I (Add keywords under the defined token types and do not change any rules.) TimeBank: comprehensive data in formal text WikiWars: specific domain data in formal text Tweets: comprehensive data in informal text
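The SynTime-E expansion mentioned here only adds manually vetted keywords under the existing token types and leaves the heuristic rules untouched. A small sketch of what that might look like follows; the assignment of 'today' and its informal variants ('2day', 'tday', both quoted in the paper) to the TIMELINE type, and the expand helper itself, are assumptions made for illustration.

```python
# Sketch of SynTime-E style expansion: append new surface forms to the regex of
# an existing token type; no heuristic rule changes are needed.
import re

lexicon = {"TIMELINE": r"(today|tomorrow|yesterday|tonight)"}   # assumed initial forms

def expand(lexicon, token_type, new_forms):
    """Add manually vetted keywords (e.g. informal tweet spellings) to a type."""
    base = lexicon[token_type].rstrip(")")
    lexicon[token_type] = base + "|" + "|".join(map(re.escape, new_forms)) + ")"

expand(lexicon, "TIMELINE", ["2day", "tday"])     # informal variants of 'today'
print(bool(re.fullmatch(lexicon["TIMELINE"], "2day", flags=re.IGNORECASE)))   # True
```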
[]
GEM-SciDuet-train-99#paper-1262#slide-15
1262
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime 1 for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249 ], "paper_content_text": [ "Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014) .", "Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "1 Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b) , Gigaword (Parker et al., 2011) , WikiWars (Mazur and Dale, 2010) , and Tweets.", "From the analysis we make four findings about time expressions.", "First, most time expressions are very short, with 80% of time expressions containing no more than three tokens.", "Second, at least 91.8% of time expressions contain at least one time token.", "Third, the vocabulary used to express time information is very small, with a small group of keywords.", "Finally, words in time expressions demonstrate similar syntactic behaviour.", "All the findings relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "According to the findings we propose a typebased approach named SynTime ('Syn' stands for syntactic) to recognize time expressions.", "Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions.", "Time tokens are the words that explicitly express time information, such as time units (e.g., 'year').", "Modifiers modify time tokens; they appear before or after time tokens, e.g., 'several' and 'ago' in 'several years ago.'", "Numerals are ordinals and numbers.", "From free text SynTime first identifies time tokens, then recognizes modifiers and numerals.", "Naturally, SynTime is a rule-based tagger.", "The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules.", "The definition of token type in SynTime is inspired by 
part-of-speech in which \"linguists group some words of language into classes (sets) which show similar syntactic behaviour.\"", "(Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour.", "Other rulebased taggers define types for tokens based on their semantic meaning.", "For example, SUTime defines 5 semantic modifier types, such as frequency modifiers; 2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens.", "(See Section 4.1 for details.)", "Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves.", "SynTime instead designs general rules on the token types rather than on the tokens themselves.", "For example, our general rules do not work on tokens 'February' nor '1989' but on their token types 'MONTH' and 'YEAR.'", "That is why we call SynTime a type-based approach.", "More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position.", "In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion.", "The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time.", "The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "(The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.)", "Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.", "3 datasets.", "More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset.", "To sum up, we make the following contributions.", "• We analyze time expressions from four datasets and make four findings.", "The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "• We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules.", "Syn-Time is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages.", "• We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines.", "Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "The task is divided into two subtasks: recognition and normalization.", "Rule-based Time Expression Recognition.", "Rule-based time taggers like GUTime, Heidel-Time, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Strötgen and Gertz, 2010; Chang and Manning, 2012) .", "Heidel-Time (Strötgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression.", "SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang 
and Manning, 2014) .", "It first identifies individual words, then expands them to chunks, and finally to time expressions.", "Rule-based taggers achieve very good results in TempEval exercises.", "SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens.", "Moreover, SynTime designs rules in a heuristic way.", "Machine Learning based Method.", "Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions.", "Example features include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013) .", "The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Some models obtain good performance, and even achieve the highest F 1 of 82.71% on strict match in TempEval-3 (Bethard, 2013) .", "Outside TempEval exercises, Angeli et al.", "leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012) .", "In the method named UWTime, Lee et al.", "handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014) .", "The two methods explicitly use linguistic information.", "In (Lee et al., 2014) , especially, CCG could capture rich structure information of language, similar to the rule-based methods.", "Tabassum et al.", "focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016) .", "They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.", "However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time.", "Time Expression Normalization.", "Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Strötgen and Gertz, 2010; Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Because the rule systems have high similarity, Llorens et al.", "suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012) .", "Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016) .", "Lee et al.", "(Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al.", "(Tabassum et al., 2016 ) use a loglinear algorithm to normalize time expressions.", "SynTime focuses only on the recognition task.", "The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012) .", "We conduct an analysis on four datasets: Time-Bank, Gigaword, WikiWars, and Tweets.", "Time-Bank (Pustejovsky et al., 2003b ) is a benchmark dataset in TempEval series (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , consisting of 183 news articles.", "Gigaword (Parker et al., 2011 ) is a large 
automatically labelled dataset with 2,452 news articles and used in TempEval-3.", "WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010) .", "Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression.", "Table 1 summarizes the datasets.", "Finding From the four datasets, we analyze their time expressions and make four findings.", "We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics.", "Finding 1 Time expressions are very short.", "More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words.", "Figure 1 plots the length distribution of time expressions.", "Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length Finding 2 More than 91% of time expressions contain at least one time token.", "The second column in Table 2 reports the percentage of time expressions that contain at least one time token.", "We find that at least 91.81% of time expressions contain time token(s).", "(Some time expressions have no time token but depend on other time expressions; in '2 to 8 days,' for example, '2' depends on '8 days.')", "This suggests that time tokens account for time expressions.", "Therefore, to recognize time expressions, it is essential to recognize their time tokens.", "Finding 3 Only a small group of time-related keywords are used to express time information.", "From the time expressions in all four datasets, we find that the group of keywords used to express time information is small.", "Table 3 reports the number of distinct words and of distinct time tokens.", "The words/tokens are manually normalized before counting and their variants are ignored.", "For example, 'year' and '5yrs' are counted as one token 'year.'", "Numerals in the counting are ignored.", "Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable.", "Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282.", "Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets.", "This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets.", "In other words, time expressions highly overlap at their time tokens.", "Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents.", "For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text.", "Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD.", "This indicates that POS could not provide enough information to distinguish time expressions from common words.", "However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT.", "Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD.", "This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way.", "When seeing this, we realize that this is exactly how linguists define part-of-speech for 
language.", "4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language.", "The four findings all relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small.", "To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral.", "SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types.", "Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level.", "Token types at the type level group the tokens of time expressions.", "Heuristic rules lie at the rule level, working on token types rather than on tokens themselves.", "That is why the heuristic rules are general.", "For example, the heuristic rules do not work on tokens '1989' nor 'February,' but on their token types 'YEAR' and 'MONTH.'", "The heuristic rules are only relevant to token types, and are independent of specific tokens.", "For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English.", "The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types.", "Figure 3 shows the overview of SynTime in practice.", "Shown on the left-hand side, SynTime is initialized with regular expressions over tokens.", "After initialization, SynTime can be directly applied on text.", "On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type.", "The expansion enables SynTime to recognize time expressions in text from different domains and different text types.", "Shown on the right-hand side of Figure 3 , Syn-Time recognizes time expression through three main steps.", "In the first step, SynTime identifies time tokens from the POS-tagged raw text.", "Then around the time tokens SynTime searches for modifiers and numerals to form time segments.", "In the last step, SynTime transforms the time segments to time expressions.", "SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral.", "Token types to tokens is like POS tags to words; for example, 'February' has a POS tag of NNP and a token type of MONTH.", "Time Token.", "We define 15 token types for the time tokens and use their names similar to Joda-Time classes: 5 DECADE (-), YEAR (-), SEA-SON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), 
TIME ZONE (6), and ERA (2).", "Number in '()' indicates the number of distinct tokens in this token type.", "'-' indicates that this token type involves changing digits and cannot be counted.", "Modifier.", "We define 3 token types for the modifiers according to their possible positions relative to time tokens.", "Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2).", "LINKAGE (4) link two time tokens.", "Besides, we define 2 special modifier types, COMMA (1) for comma ',' and IN ARTICLE (2) for indefinite articles 'a' and 'an.'", "TimeML (Pustejovsky et al., 2003a) and Time-Bank (Pustejovsky et al., 2003b) do not treat most prepositions like 'on' as a part of time expressions.", "Thus SynTime does not collect those prepositions.", "Numeral.", "Number in time expressions can be a time token e.g., '10' in 'October 10, 2016,' or a modifier e.g., '10' in '10 days.'", "We define NU-MERAL (-) for the ordinals and numbers.", "SynTime Initialization.", "The token regular expressions for initializing SynTime are collected from SUTime, 6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval-3 (Chang and Manning, , 2013 .", "Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions.", "Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions.", "The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction.", "Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions.", "Some words might cause ambiguity.", "For example, 'May' could be a modal verb, or the fifth month of year.", "To filter out the ambiguous words, we use POS information.", "In implementation, we use Stanford POS Tagger; 7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2.", "Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions.", "In the next two steps, SynTime works on those token types.", "Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment.", "The searching is under simple heuristic rules in which the key idea is to expand the time token's boundaries.", "At first, each time token is a time segment.", "If it is either a PERIOD or DURATION, then no need to further search.", "Otherwise, search its left and its right for modifiers and numerals.", "For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching.", "For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching.", "Both the left and the right searching stop when reaching a COMMA or LINK-AGE or a non-modifier/numeral word.", "The left searching does not exceed the previous time token; the right searching does not exceed the next time token.", "A time segment consists of exactly one time token, and zero or some modifiers/numerals.", "A special kind of time segments do not contain any time token; they depend on other time segments next to them.", "For example, 
in '8 to 20 days,' 'to 20 days' is a time segment, and '8 to' forms a dependent time segment.", "(See Figure 4(e) .)", "Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment.", "We scan the time segments in a sentence from beginning to the end.", "A stand-alone time segment is a time expression.", "(See Figure 4(a) .)", "The focus is to deal with two or more time segments that are adjacent or overlapping.", "If two time segments s 1 and s 2 are adjacent, merge them to form a new time segment s 1 .", "(See Figure 4(b) .)", "Consider that s 1 and s 2 overlap at a shared boundary.", "According to our time segment identification, the shared boundary could be a modifier or a numeral.", "If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s 1 and s 2 .", "(See Figure 4(c) .)", "If the word is a LINKAGE, then extract s 1 as a time expression and continue scanning.", "When the shared boundary is a COMMA, merge s 1 and s 2 only if the COMMA's previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same.", "(See Figure 4(d) .)", "Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types.", "After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types.", "SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule.", "The expansion requires the words to be added to be annotated manually.", "We apply the initial SynTime on the time expressions from training text and list the words that are not covered.", "Whether the uncovered words are added to SynTime is manually determined.", "The rule for determination is that the added words can not cause ambiguity and should be generic.", "Wiki-Wars dataset contains a few examples like this: 'The time Arnold reached Quebec City.'", "Words in this example are extremely descriptive, and we do not collect them.", "In tweets, on the other hand, people may use abbreviations and informal variants; for example, '2day' and 'tday' are popular spellings of 'today.'", "Such kind of abbreviations and informal variants will be collected.", "According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much.", "In addition, we find that even in tweets people tend to use formal words.", "In the Twitter word clusters trained from 56 million English tweets, 8 the most often used words are the formal words, and their frequencies are much greater than the informal words'.", "The cluster of 'today,' 9 for example, its most often use is the formal one, 'today,' which appears 1,220,829 times; while its second most often use '2day' appears only 34,827 times.", "The low rate of informal words (e.g., about 3% in 'today' cluster) suggests that even in informal environment the manual keyword addition costs little.", "Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UW-Time) on three datasets (i.e., TimeBank, Wiki-Wars, and Tweets).", "WikiWars is a specific domain 
dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text.", "For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I.", "Experiment Setting Datasets.", "We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset that are collected from Twitter.", "For Tweets dataset, we randomly sample 4000 tweets and use SUTime to tag them.", "942 tweets of which each contains at least one time expression.", "From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions.", "We therefore roughly consider that SU-Time misses about 3% time expressions in tweets.", "Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank.", "We finally get 1,127 manually labeled time expressions.", "For the 942 tweets, we randomly sample 200 tweets as test set, and the rest 742 as training set, because a baseline UWTime requires training.", "Baseline Methods.", "We compare SynTime with methods: HeidelTime (Strötgen and Gertz, 2010) , SUTime (Chang and , and UW- Evaluation Metrics.", "We follow TempEval-3 and use their evaluation toolkit 10 to report P recision, Recall, and F 1 in terms of strict match and relaxed match (UzZaman et al., 2013).", "22, 1986' and 'February 01, 1989 ' at the level of word or of character.", "One suggestion is to consider a type-based learning method that could use type information.", "For example, the above two time expressions refer to the same pattern of 'MONTH NUMERAL COMMA Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. 
On TimeBank and Tweets datasets, only a few tokens are added, the corresponding results are affected slightly.", "This confirms that the size of time words is small, and that SynTime-I covers most of time words.", "On WikiWars dataset, relatively more tokens are added, SynTime-E performs much better than SynTime-I, especially in recall.", "It improves the recall by 3.25% in strict match and by 2.98% in relaxed match.", "This indicates that with more words added from specific domains (e.g., WikiWars dataset about war), SynTime can significantly improve the performance.", "Experiment Result Limitations SynTime assumes that words are tokenized and POS tagged correctly.", "In reality, however, the tokenized and tagged words are not that perfect, due to the limitation of used tools.", "For example, Stanford POS Tagger assigns VBD to the word 'sat' in 'friday or sat' while whose tag should be NNP.", "The incorrect tokens and POS tags affect the result.", "Conclusion and future work We conduct an analysis on time expressions from four datasets, and find that time expressions in general are very short and expressed by a small vocabulary, and words in time expressions demonstrate similar syntactic behavior.", "Our findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "Inspired by part-of-speech, based on the findings, we define a syntactic type system for the time expression, and propose a type-based time expression tagger, named by SynTime.", "SynTime defines syntactic token types for tokens and on the token types it designs general heuristic rules based on the idea of boundary expansion.", "Experiments on three datasets show that SynTime outperforms the stateof-the-art baselines, including rule-based time taggers and machine learning based time tagger.", "Because our heuristic rules are quite simple, Syn-Time is light-weight and runs in real time.", "Our token types and heuristic rules are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.", "Time expression is part of language and follows the principle of least effort.", "Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995) , we might expect that humans would share some common habits, and therefore expect that other parts of language would more or less follow the same principle.", "In the future we will try our analytical method on other parts of language." ] }
{ "paper_header_number": [ "1", "2", "3.2", "4", "4.1", "4.2", "4.2.1", "4.2.2", "4.2.3", "4.3", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Finding", "SynTime: Syntactic Token Types and General Heuristic Rules", "SynTime Construction", "Time Expression Recognition", "Time Token Identification", "Time Segment Identification", "Time Expression Extraction", "SynTime Expansion", "Experiments", "Limitations", "Conclusion and future work" ] }
GEM-SciDuet-train-99#paper-1262#slide-15
Overall performance
borrowed from their original papers and the papers indicated by the references. SUTime (Chang and Manning, 2013)
borrowed from their original papers and the papers indicated by the references. SUTime (Chang and Manning, 2013)
[]
GEM-SciDuet-train-99#paper-1262#slide-16
1262
Time Expression Analysis and Recognition Using Syntactic Token Types and General Heuristic Rules
Extracting time expressions from free text is a fundamental task for many applications. We analyze time expressions from four different datasets and find that only a small group of words are used to express time information and that the words in time expressions demonstrate similar syntactic behaviour. Based on the findings, we propose a type-based approach named SynTime for time expression recognition. Specifically, we define three main syntactic token types, namely time token, modifier, and numeral, to group time-related token regular expressions. On the types we design general heuristic rules to recognize time expressions. In recognition, SynTime first identifies time tokens from raw text, then searches their surroundings for modifiers and numerals to form time segments, and finally merges the time segments to time expressions. As a lightweight rule-based tagger, SynTime runs in real time, and can be easily expanded by simply adding keywords for the text from different domains and different text types. Experiments on benchmark datasets and tweets data show that SynTime outperforms state-of-the-art methods.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249 ], "paper_content_text": [ "Introduction Time expression plays an important role in information retrieval and many applications in natural language processing (Alonso et al., 2011; Campos et al., 2014) .", "Recognizing time expressions from free text has attracted considerable attention since last decade (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "1 Source: https://github.com/zhongxiaoshi/syntime We analyze time expressions in four datasets: TimeBank (Pustejovsky et al., 2003b) , Gigaword (Parker et al., 2011) , WikiWars (Mazur and Dale, 2010) , and Tweets.", "From the analysis we make four findings about time expressions.", "First, most time expressions are very short, with 80% of time expressions containing no more than three tokens.", "Second, at least 91.8% of time expressions contain at least one time token.", "Third, the vocabulary used to express time information is very small, with a small group of keywords.", "Finally, words in time expressions demonstrate similar syntactic behaviour.", "All the findings relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act under the least effort in order to minimize the cost of energy at both individual level and collective level to language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "According to the findings we propose a typebased approach named SynTime ('Syn' stands for syntactic) to recognize time expressions.", "Specifically, we define three main token types, namely time token, modifier, and numeral, to group timerelated token regular expressions.", "Time tokens are the words that explicitly express time information, such as time units (e.g., 'year').", "Modifiers modify time tokens; they appear before or after time tokens, e.g., 'several' and 'ago' in 'several years ago.'", "Numerals are ordinals and numbers.", "From free text SynTime first identifies time tokens, then recognizes modifiers and numerals.", "Naturally, SynTime is a rule-based tagger.", "The key difference between SynTime and other rulebased taggers lies in the way of defining token types and the way of designing rules.", "The definition of token type in SynTime is inspired by 
part-of-speech in which \"linguists group some words of language into classes (sets) which show similar syntactic behaviour.\"", "(Manning and Schutze, 1999) SynTime defines token types for tokens according to their syntactic behaviour.", "Other rulebased taggers define types for tokens based on their semantic meaning.", "For example, SUTime defines 5 semantic modifier types, such as frequency modifiers; 2 while SynTime defines 5 syntactic modifier types, such as modifiers that appear before time tokens.", "(See Section 4.1 for details.)", "Accordingly, other rule-based taggers design deterministic rules based on their meanings of tokens themselves.", "SynTime instead designs general rules on the token types rather than on the tokens themselves.", "For example, our general rules do not work on tokens 'February' nor '1989' but on their token types 'MONTH' and 'YEAR.'", "That is why we call SynTime a type-based approach.", "More importantly, other rule-based taggers design rules in a fixed method, including fixed length and fixed position.", "In contrast, SynTime designs general rules in a heuristic way, based on the idea of boundary expansion.", "The general heuristic rules are quite light-weight that it makes SynTime much more flexible and expansible, and leads SynTime to run in real time.", "The heuristic rules are designed on token types and are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "(The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.)", "Specifically, we evaluate SynTime against three state-of-the-art methods (i.e., HeidelTime, SUTime, and UWTime) on three datasets: TimeBank, WikiWars, and Tweets.", "3 datasets.", "More importantly, SynTime achieves the best recalls on all three datasets and exceptionally good results on Tweets dataset.", "To sum up, we make the following contributions.", "• We analyze time expressions from four datasets and make four findings.", "The findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "• We propose a time tagger named SynTime to recognize time expressions using syntactic token types and general heuristic rules.", "Syn-Time is independent of specific tokens, and therefore independent of specific domains, specific text types, and specific languages.", "• We conduct experiments on three datasets, and the results demonstrate the effectiveness of SynTime against state-of-the-art baselines.", "Related Work Many research works on time expression identification are reported in TempEval exercises (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) .", "The task is divided into two subtasks: recognition and normalization.", "Rule-based Time Expression Recognition.", "Rule-based time taggers like GUTime, Heidel-Time, and SUTime, predefine time-related words and rules (Verhagen et al., 2005; Strötgen and Gertz, 2010; Chang and Manning, 2012) .", "Heidel-Time (Strötgen and Gertz, 2010) hand-crafts rules with time resources like weekdays and months, and leverages language clues like part-of-speech to identify time expression.", "SUTime (Chang and Manning, 2012) designs deterministic rules using a cascade finite automata (Hobbs et al., 1997) on regular expressions over tokens (Chang 
and Manning, 2014) .", "It first identifies individual words, then expands them to chunks, and finally to time expressions.", "Rule-based taggers achieve very good results in TempEval exercises.", "SynTime is also a rule-based tagger while its key difference from other rule-based taggers is that between the rules and the tokens it introduces a layer of token type; its rules work on token types and are independent of specific tokens.", "Moreover, SynTime designs rules in a heuristic way.", "Machine Learning based Method.", "Machine learning based methods extract features from the text and apply statistical models on the features for recognizing time expressions.", "Example features include character features, word features, syntactic features, semantic features, and gazetteer features (Llorens et al., 2010; Filannino et al., 2013; Bethard, 2013) .", "The statistical models include Markov logic network, logistic regression, support vector machines, maximum entropy, and conditional random fields (Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Some models obtain good performance, and even achieve the highest F 1 of 82.71% on strict match in TempEval-3 (Bethard, 2013) .", "Outside TempEval exercises, Angeli et al.", "leverage compositional grammar and employ a EMstyle approach to learn a latent parser for time expression recognition (Angeli et al., 2012) .", "In the method named UWTime, Lee et al.", "handcraft a combinatory categorial grammar (CCG) (Steedman, 1996) to define a set of lexicon with rules and use L1-regularization to learn linguistic context (Lee et al., 2014) .", "The two methods explicitly use linguistic information.", "In (Lee et al., 2014) , especially, CCG could capture rich structure information of language, similar to the rule-based methods.", "Tabassum et al.", "focus on resolving the dates in tweets, and use distant supervision to recognize time expressions (Tabassum et al., 2016) .", "They use five time types and assign one of them to each word, which is similar to SynTime in the way of defining types over tokens.", "However, they focus only on the type of date, while SynTime recoginizes all the time expressions and does not involve learning and runs in real time.", "Time Expression Normalization.", "Methods in TempEval exercises design rules for time expression normalization (Verhagen et al., 2005; Strötgen and Gertz, 2010; Llorens et al., 2010; Uz-Zaman and Allen, 2010; Filannino et al., 2013; Bethard, 2013) .", "Because the rule systems have high similarity, Llorens et al.", "suggest to construct a large knowledge base as a public resource for the task (Llorens et al., 2012) .", "Some researchers treat the normalization process as a learning task and use machine learning methods (Lee et al., 2014; Tabassum et al., 2016) .", "Lee et al.", "(Lee et al., 2014) use AdaGrad algorithm (Duchi et al., 2011) and Tabassum et al.", "(Tabassum et al., 2016 ) use a loglinear algorithm to normalize time expressions.", "SynTime focuses only on the recognition task.", "The normalization could be achieved by using methods similar to the existing rule systems, because they are highly similar (Llorens et al., 2012) .", "We conduct an analysis on four datasets: Time-Bank, Gigaword, WikiWars, and Tweets.", "Time-Bank (Pustejovsky et al., 2003b ) is a benchmark dataset in TempEval series (Verhagen et al., 2007 (Verhagen et al., , 2010 UzZaman et al., 2013) , consisting of 183 news articles.", "Gigaword (Parker et al., 2011 ) is a large 
automatically labelled dataset with 2,452 news articles and used in TempEval-3.", "WikiWars dataset is derived from Wikipedia articles about wars (Mazur and Dale, 2010) .", "Tweets is our manually annotated dataset with 942 tweets of which each contains at least one time expression.", "Table 1 summarizes the datasets.", "Finding From the four datasets, we analyze their time expressions and make four findings.", "We will see that despite the four datasets vary in corpus sizes, in text types, and in domains, their time expressions demonstrate similar characteristics.", "Finding 1 Time expressions are very short.", "More than 80% of time expressions contain no more than three words and more than 90% contain no more than four words.", "Figure 1 plots the length distribution of time expressions.", "Although the texts are collected from different sources (i.e., news articles, Wikipedia articles, and tweets) and vary in sizes, the length Finding 2 More than 91% of time expressions contain at least one time token.", "The second column in Table 2 reports the percentage of time expressions that contain at least one time token.", "We find that at least 91.81% of time expressions contain time token(s).", "(Some time expressions have no time token but depend on other time expressions; in '2 to 8 days,' for example, '2' depends on '8 days.')", "This suggests that time tokens account for time expressions.", "Therefore, to recognize time expressions, it is essential to recognize their time tokens.", "Finding 3 Only a small group of time-related keywords are used to express time information.", "From the time expressions in all four datasets, we find that the group of keywords used to express time information is small.", "Table 3 reports the number of distinct words and of distinct time tokens.", "The words/tokens are manually normalized before counting and their variants are ignored.", "For example, 'year' and '5yrs' are counted as one token 'year.'", "Numerals in the counting are ignored.", "Despite the four datasets vary in sizes, domains, and text types, the numbers of their distinct time tokens are comparable.", "Across the four datasets, the number of distinct words is 350, about half of the simply summing of 675; the number of distinct time tokens is 123, less than half of the simply summing 282.", "Among the 123 distinct time tokens, 45 appear in all the four datasets, and 101 appear in at least two datasets.", "This indicates that time tokens, which account for time expressions, are highly overlapped across the four datasets.", "In other words, time expressions highly overlap at their time tokens.", "Finding 4 POS information could not distinguish time expressions from common words, but within time expressions, POS tags can help distinguish their constituents.", "For each dataset we list the top 10 POS tags that appear in time expressions, and their percentages over the whole text.", "Among the 40 tags (10 × 4 datasets), 37 have percentage lower than 20%; other 3 are CD.", "This indicates that POS could not provide enough information to distinguish time expressions from common words.", "However, the most common POS tags in time expressions are NN*, JJ, RB, CD, and DT.", "Within time expressions, the time tokens usually have NN* and RB, the modifiers have JJ and RB, and the numerals have CD.", "This finding indicates that for the time expressions, their similar constituents behave in similar syntactic way.", "When seeing this, we realize that this is exactly how linguists define part-of-speech for 
language.", "4 The definition of POS for language inspires us to define a syntactic type system for the time expression, part of language.", "The four findings all relate to the principle of least effort (Zipf, 1949) .", "That is, people tend to act with least effort so as to minimize the cost of energy at both individual and collective levels to the language usage (Zipf, 1949) .", "Time expression is part of language and acts as an interface of communication.", "Short expressions, occurrence, small vocabulary, and similar syntactic behaviour all reduce the cost of energy required to communicate.", "To summarize: on average, a time expression contains two tokens of which one is time token and the other is modifier/numeral, and the size of time tokens is small.", "To recognize a time expression, therefore, we first recognize the time token, then recognize the modifier/numeral.", "SynTime: Syntactic Token Types and General Heuristic Rules SynTime defines a syntactic type system for the tokens of time expressions, and designs heuristic rules working on the token types.", "Figure 2 shows the layout of SynTime, consisting of three levels: Token level, type level, and rule level.", "Token types at the type level group the tokens of time expressions.", "Heuristic rules lie at the rule level, working on token types rather than on tokens themselves.", "That is why the heuristic rules are general.", "For example, the heuristic rules do not work on tokens '1989' nor 'February,' but on their token types 'YEAR' and 'MONTH.'", "The heuristic rules are only relevant to token types, and are independent of specific tokens.", "For this reason, our token types and heuristic rules are independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domain (i.e., war domain) and specific text types (i.e., formal text and informal text) in English.", "The test for other languages simply needs to construct a set of token regular expressions in the target language under our defined token types.", "Figure 3 shows the overview of SynTime in practice.", "Shown on the left-hand side, SynTime is initialized with regular expressions over tokens.", "After initialization, SynTime can be directly applied on text.", "On the other hand, SynTime can be easily expanded by simply adding the time-related token regular expressions from training text under each defined token type.", "The expansion enables SynTime to recognize time expressions in text from different domains and different text types.", "Shown on the right-hand side of Figure 3 , Syn-Time recognizes time expression through three main steps.", "In the first step, SynTime identifies time tokens from the POS-tagged raw text.", "Then around the time tokens SynTime searches for modifiers and numerals to form time segments.", "In the last step, SynTime transforms the time segments to time expressions.", "SynTime Construction We define a syntactic type system for time expression, specifically, 15 token types for time tokens, 5 token types for modifiers, and 1 token type for numeral.", "Token types to tokens is like POS tags to words; for example, 'February' has a POS tag of NNP and a token type of MONTH.", "Time Token.", "We define 15 token types for the time tokens and use their names similar to Joda-Time classes: 5 DECADE (-), YEAR (-), SEA-SON (5), MONTH (12), WEEK (7), DATE (-), TIME (-), DAY TIME (27), TIMELINE (12), HOLIDAY (20), PERIOD (9), DURATION (-), TIME UNIT (15), 
TIME ZONE (6), and ERA (2).", "Number in '()' indicates the number of distinct tokens in this token type.", "'-' indicates that this token type involves changing digits and cannot be counted.", "Modifier.", "We define 3 token types for the modifiers according to their possible positions relative to time tokens.", "Modifiers that appear before time tokens are PREFIX (48); modifiers after time tokens are SUFFIX (2).", "LINKAGE (4) link two time tokens.", "Besides, we define 2 special modifier types, COMMA (1) for comma ',' and IN ARTICLE (2) for indefinite articles 'a' and 'an.'", "TimeML (Pustejovsky et al., 2003a) and Time-Bank (Pustejovsky et al., 2003b) do not treat most prepositions like 'on' as a part of time expressions.", "Thus SynTime does not collect those prepositions.", "Numeral.", "Number in time expressions can be a time token e.g., '10' in 'October 10, 2016,' or a modifier e.g., '10' in '10 days.'", "We define NU-MERAL (-) for the ordinals and numbers.", "SynTime Initialization.", "The token regular expressions for initializing SynTime are collected from SUTime, 6 a state-of-the-art rule-based tagger that achieved the highest recall in TempEval-3 (Chang and Manning, , 2013 .", "Specifically, we collect from SUTime only the tokens and the regular expressions over tokens, and discard its other rules of recognizing full time expressions.", "Time Expression Recognition On the token types, SynTime designs a small set of heuristic rules to recognize time expressions.", "The recognition process includes three main steps: (1) time token identification, (2) time segment identification, and (3) time expression extraction.", "Time Token Identification Identifying time tokens is simple, through matching of string and regular expressions.", "Some words might cause ambiguity.", "For example, 'May' could be a modal verb, or the fifth month of year.", "To filter out the ambiguous words, we use POS information.", "In implementation, we use Stanford POS Tagger; 7 and the POS tags for matching the instances of token types in SynTime are based on our Finding 4 in Section 3.2.", "Besides time tokens are identified, in this step, individual token is assigned with one token type of either modifier or numeral if it is matched with token regular expressions.", "In the next two steps, SynTime works on those token types.", "Time Segment Identification The task of time segment identification is to search the surrounding of each time token identified in previous step for modifiers and numerals, then gather the time token with its modifiers and numerals to form a time segment.", "The searching is under simple heuristic rules in which the key idea is to expand the time token's boundaries.", "At first, each time token is a time segment.", "If it is either a PERIOD or DURATION, then no need to further search.", "Otherwise, search its left and its right for modifiers and numerals.", "For the left searching, if encounter a PREFIX or NUMERAL or IN ARTICLE, then continue searching.", "For the right searching, if encounter a SUFFIX or NUMERAL, then continue searching.", "Both the left and the right searching stop when reaching a COMMA or LINK-AGE or a non-modifier/numeral word.", "The left searching does not exceed the previous time token; the right searching does not exceed the next time token.", "A time segment consists of exactly one time token, and zero or some modifiers/numerals.", "A special kind of time segments do not contain any time token; they depend on other time segments next to them.", "For example, 
in '8 to 20 days,' 'to 20 days' is a time segment, and '8 to' forms a dependent time segment.", "(See Figure 4(e) .)", "Time Expression Extraction The task of time expression extraction is to extract time expressions from the identified time segments in which the core step is to determine whether to merge two adjacent or overlapping time segments into a new time segment.", "We scan the time segments in a sentence from beginning to the end.", "A stand-alone time segment is a time expression.", "(See Figure 4(a) .)", "The focus is to deal with two or more time segments that are adjacent or overlapping.", "If two time segments s 1 and s 2 are adjacent, merge them to form a new time segment s 1 .", "(See Figure 4(b) .)", "Consider that s 1 and s 2 overlap at a shared boundary.", "According to our time segment identification, the shared boundary could be a modifier or a numeral.", "If the word at the shared boundary is neither a COMMA nor a LINKAGE, then merge s 1 and s 2 .", "(See Figure 4(c) .)", "If the word is a LINKAGE, then extract s 1 as a time expression and continue scanning.", "When the shared boundary is a COMMA, merge s 1 and s 2 only if the COMMA's previous token and its next token satisfy the three conditions: (1) the previous token is a time token or a NUMERAL; (2) the next token is a time token; and (3) the token types of the previous token and of the next token are not the same.", "(See Figure 4(d) .)", "Although Figure 4 shows the examples as token types together with the tokens, we should note that the heuristic rules only work on the token types.", "After the extraction step, time expressions are exported as a sequence of tokens from the sequence of token types.", "SynTime Expansion SynTime could be expanded by simply adding new words under each defined token type without changing any rule.", "The expansion requires the words to be added to be annotated manually.", "We apply the initial SynTime on the time expressions from training text and list the words that are not covered.", "Whether the uncovered words are added to SynTime is manually determined.", "The rule for determination is that the added words can not cause ambiguity and should be generic.", "Wiki-Wars dataset contains a few examples like this: 'The time Arnold reached Quebec City.'", "Words in this example are extremely descriptive, and we do not collect them.", "In tweets, on the other hand, people may use abbreviations and informal variants; for example, '2day' and 'tday' are popular spellings of 'today.'", "Such kind of abbreviations and informal variants will be collected.", "According to our findings, not many words are used to express time information, the manual addition of keywords thus will not cost much.", "In addition, we find that even in tweets people tend to use formal words.", "In the Twitter word clusters trained from 56 million English tweets, 8 the most often used words are the formal words, and their frequencies are much greater than the informal words'.", "The cluster of 'today,' 9 for example, its most often use is the formal one, 'today,' which appears 1,220,829 times; while its second most often use '2day' appears only 34,827 times.", "The low rate of informal words (e.g., about 3% in 'today' cluster) suggests that even in informal environment the manual keyword addition costs little.", "Experiments We evaluate SynTime against three state-of-theart baselines (i.e., HeidelTime, SUTime, and UW-Time) on three datasets (i.e., TimeBank, Wiki-Wars, and Tweets).", "WikiWars is a specific domain 
dataset about war; TimeBank and WikiWars are the datasets in formal text while Tweets dataset is in informal text.", "For SynTime we report the results of its two versions: SynTime-I and SynTime-E. SynTime-I is the initial version, and SynTime-E is the expanded version of SynTime-I.", "Experiment Setting Datasets.", "We use three datasets of which TimeBank and WikiWars are benchmark datasets whose details are shown in Section 3.1; Tweets is our manually labeled dataset that are collected from Twitter.", "For Tweets dataset, we randomly sample 4000 tweets and use SUTime to tag them.", "942 tweets of which each contains at least one time expression.", "From the remaining 3,058 tweets, we randomly sample 500 and manually annotate them, and find that only 15 tweets contain time expressions.", "We therefore roughly consider that SU-Time misses about 3% time expressions in tweets.", "Two annotators then manually annotate the 942 tweets with discussion to final agreement according to the standards of TimeML and TimeBank.", "We finally get 1,127 manually labeled time expressions.", "For the 942 tweets, we randomly sample 200 tweets as test set, and the rest 742 as training set, because a baseline UWTime requires training.", "Baseline Methods.", "We compare SynTime with methods: HeidelTime (Strötgen and Gertz, 2010) , SUTime (Chang and , and UW- Evaluation Metrics.", "We follow TempEval-3 and use their evaluation toolkit 10 to report P recision, Recall, and F 1 in terms of strict match and relaxed match (UzZaman et al., 2013).", "22, 1986' and 'February 01, 1989 ' at the level of word or of character.", "One suggestion is to consider a type-based learning method that could use type information.", "For example, the above two time expressions refer to the same pattern of 'MONTH NUMERAL COMMA Table 5 lists the number of time tokens and modifiers added to SynTime-I to get SynTime-E. 
On TimeBank and Tweets datasets, only a few tokens are added, the corresponding results are affected slightly.", "This confirms that the size of time words is small, and that SynTime-I covers most of time words.", "On WikiWars dataset, relatively more tokens are added, SynTime-E performs much better than SynTime-I, especially in recall.", "It improves the recall by 3.25% in strict match and by 2.98% in relaxed match.", "This indicates that with more words added from specific domains (e.g., WikiWars dataset about war), SynTime can significantly improve the performance.", "Experiment Result Limitations SynTime assumes that words are tokenized and POS tagged correctly.", "In reality, however, the tokenized and tagged words are not that perfect, due to the limitation of used tools.", "For example, Stanford POS Tagger assigns VBD to the word 'sat' in 'friday or sat' while whose tag should be NNP.", "The incorrect tokens and POS tags affect the result.", "Conclusion and future work We conduct an analysis on time expressions from four datasets, and find that time expressions in general are very short and expressed by a small vocabulary, and words in time expressions demonstrate similar syntactic behavior.", "Our findings provide evidence in terms of time expression for the principle of least effort (Zipf, 1949) .", "Inspired by part-of-speech, based on the findings, we define a syntactic type system for the time expression, and propose a type-based time expression tagger, named by SynTime.", "SynTime defines syntactic token types for tokens and on the token types it designs general heuristic rules based on the idea of boundary expansion.", "Experiments on three datasets show that SynTime outperforms the stateof-the-art baselines, including rule-based time taggers and machine learning based time tagger.", "Because our heuristic rules are quite simple, Syn-Time is light-weight and runs in real time.", "Our token types and heuristic rules are independent of specific tokens, SynTime therefore is independent of specific domains, specific text types, and even specific languages that consist of specific tokens.", "In this paper, we test SynTime on specific domains and specific text types in English.", "The test for other languages needs only to construct a collection of token regular expressions in the target language under our defined token types.", "Time expression is part of language and follows the principle of least effort.", "Since language usage relates to human habits (Zipf, 1949; Chomsky, 1986; Pinker, 1995) , we might expect that humans would share some common habits, and therefore expect that other parts of language would more or less follow the same principle.", "In the future we will try our analytical method on other parts of language." ] }
{ "paper_header_number": [ "1", "2", "3.2", "4", "4.1", "4.2", "4.2.1", "4.2.2", "4.2.3", "4.3", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Finding", "SynTime: Syntactic Token Types and General Heuristic Rules", "SynTime Construction", "Time Expression Recognition", "Time Token Identification", "Time Segment Identification", "Time Expression Extraction", "SynTime Expansion", "Experiments", "Limitations", "Conclusion and future work" ] }
GEM-SciDuet-train-99#paper-1262#slide-16
Difference from other Rule based Methods
Method SynTime Other rule-based methods Rule level Deterministic Rules Rule level Layout Type level Time Token, Modifier, Numeral Property Heuristic rules work on token types and are independent of specific tokens, thus they are independent of specific domains and specific text types and specific languages. Deterministic rules directly work on tokens and phrases in a fixed manner, thus the taggers lack flexibility the third quarter of
Method SynTime Other rule-based methods Rule level Deterministic Rules Rule level Layout Type level Time Token, Modifier, Numeral Property Heuristic rules work on token types and are independent of specific tokens, thus they are independent of specific domains and specific text types and specific languages. Deterministic rules directly work on tokens and phrases in a fixed manner, thus the taggers lack flexibility the third quarter of
[]
GEM-SciDuet-train-100#paper-1263#slide-0
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixed-size vectors by concatenating the last hidden states of the forward and backward RNN: u_i = [h_i^{fwd}, h_i^{bwd}]. The variational lower bound (Equation 1) is L(θ, φ; x, c) = −KL(q_φ(z|x, c) || p_θ(z|c)) + E_{q_φ(z|c,x)}[log p_θ(x|z, c)] ≤ log p(x|c).", "x is simply u_k.", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u_{1:k−1} and the corresponding conversation floor as inputs.", "The last hidden state h_c of the context encoder is concatenated with meta features and c = [h_c, m].", "Since we assume z follows an isotropic Gaussian distribution, the recognition network is q_φ(z|x, c) ∼ N(µ, σ²I) and the prior network is p_θ(z|c) ∼ N(µ', σ'²I), with [µ, log(σ²)] = W_r [x, c] + b_r (Equation 2) and [µ', log(σ'²)] = MLP_p(c) (Equation 3). We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N(z; µ, σ²I) predicted by the recognition network (training) or from N(z; µ', σ'²I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s_0 = W_i [z, c] + b_i.", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires a large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn a meaningful latent z if it is provided with explicitly extracted discourse features during training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y, and that y relies on z and c, as shown in Figure 2.", "Specifically, during training the initial state of the response decoder is s_0 = W_i [z, c, y] + b_i and the input at every step is [e_t, y], where e_t is the word embedding of the t-th word in x.", "In addition, there is an MLP to predict y = MLP_y(z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle y.", "We denote the modified model as knowledge-guided CVAE (kgCVAE), and developers can add desired discourse features that they wish the latent variable z to capture.", "The kgCVAE model is trained by maximizing Equation 4: L(θ, φ; x, c, y) = −KL(q_φ(z|x, c, y) || p_θ(z|c)) + E_{q_φ(z|c,x,y)}[log p(x|z, c, y)] + E_{q_φ(z|c,x,y)}[log p(y|z, c)]. Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge .", "Following our one-tomany hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M c reference responses r j , j ∈ [1, M c ].", "Meanwhile a model can generate N hypothesis re- sponses h i , i ∈ [1, N ].", "The generalized responselevel precision/recall for a given dialog context is: precision(c) = N i=1 max j∈[1,Mc] d(r j , h i ) N recall(c) = Mc j=1 max i∈[1,N ] d(r j , h i )) M c where d(r j , h i ) is a distance function which lies between 0 to 1 and measures the similarities between r j and h i .", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015) .", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.", "Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016) .", "The d(r j , h i ) is the cosine distance of the two embedding vectors.", "We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow.", "3.", "Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r j , h i ) = 1 if r j and h i have the same dialog acts, otherwise d(r j , h i ) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts.", "This impacts reliability of our measures.", "Inspired by (Sordoni et al., 2015) , we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references in average per context.", "The average number of distinct reference dialog acts is 4.2.", "Table 1 The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
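The paper content above describes the CVAE dialog model: a prior network p_θ(z|c), a recognition network q_φ(z|x, c), the reparametrization trick, and the variational lower bound of Equation 1. The following is a minimal PyTorch sketch of that training step; the module layout, layer sizes, and the assumption that the context c and the response encoding arrive as fixed-size vectors are illustrative placeholders rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class MiniCVAE(nn.Module):
    """Sketch of one CVAE step: prior p(z|c), recognition q(z|x,c), decoder p(x|z,c)."""
    def __init__(self, ctx_dim=600, resp_dim=600, z_dim=200, vocab=10000, emb=200, hid=400):
        super().__init__()
        # Prior network: MLP with one tanh hidden layer, as in the paper's setup.
        self.prior = nn.Sequential(nn.Linear(ctx_dim, 400), nn.Tanh(), nn.Linear(400, 2 * z_dim))
        # Recognition network: linear map of [x, c] to mean and log-variance (Equation 2).
        self.recog = nn.Linear(resp_dim + ctx_dim, 2 * z_dim)
        self.init_state = nn.Linear(z_dim + ctx_dim, hid)   # s0 = W_i [z, c] + b_i
        self.emb = nn.Embedding(vocab, emb)
        self.decoder = nn.GRU(emb, hid, batch_first=True)
        self.out = nn.Linear(hid, vocab)

    def forward(self, c, x_enc, x_tokens):
        # q(z|x,c) used at training time; p(z|c) used at test time.
        mu_q, logvar_q = self.recog(torch.cat([x_enc, c], dim=-1)).chunk(2, dim=-1)
        mu_p, logvar_p = self.prior(c).chunk(2, dim=-1)
        # Reparametrization trick: z = mu + sigma * eps.
        z = mu_q + (0.5 * logvar_q).exp() * torch.randn_like(mu_q)
        # Teacher-forced decoding from the latent-conditioned initial state.
        s0 = torch.tanh(self.init_state(torch.cat([z, c], dim=-1))).unsqueeze(0)
        h, _ = self.decoder(self.emb(x_tokens), s0)
        logits = self.out(h)
        # KL(q || p) between two diagonal Gaussians: the regularizer of Equation 1.
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp() - 1).sum(-1)
        rec = nn.functional.cross_entropy(
            logits[:, :-1].reshape(-1, logits.size(-1)),
            x_tokens[:, 1:].reshape(-1))
        return rec + kl.mean()   # negative variational lower bound
```

In practice the paper anneals the KL term and adds the bag-of-word loss of Section 3.3; both are sketched after the header block below.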
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-0
Introduction
End-to-end dialog models based on encoder-decoder models have shown great promise for modeling open-domain conversations, due to their flexibility and scalability. [Figure: Dialog History/Context -> System Response] However, they suffer from the dull response problem [Li et al. 2015, Serban et al. 2016]. Current solutions include: add more info to the dialog context [Xing et al. 2016, Li et al. 2016]; improve the decoding algorithm, e.g. beam search [Wiseman and Rush 2016]. [Figure: user utterance "I am feeling quite happy today." (previous utterances) with generic replies: sure / I don't know / Yes]
End-to-end dialog models based on encoder-decoder models have shown great promise for modeling open-domain conversations, due to their flexibility and scalability. [Figure: Dialog History/Context -> System Response] However, they suffer from the dull response problem [Li et al. 2015, Serban et al. 2016]. Current solutions include: add more info to the dialog context [Xing et al. 2016, Li et al. 2016]; improve the decoding algorithm, e.g. beam search [Wiseman and Rush 2016]. [Figure: user utterance "I am feeling quite happy today." (previous utterances) with generic replies: sure / I don't know / Yes]
[]
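Section 5.2 of the paper content above defines a generalized response-level precision/recall over N sampled hypotheses and M_c reference responses, parameterized by a similarity d in [0, 1] (smoothed sentence BLEU, cosine of bag-of-word embeddings, or exact dialog-act match). A small self-contained sketch of that metric is given below; the toy dialog-act similarity and the example data are made up for illustration.

```python
def response_precision_recall(references, hypotheses, d):
    """Generalized response-level precision/recall (Section 5.2).

    references: the M_c reference responses for one dialog context
    hypotheses: the N responses sampled from the model
    d:          similarity function in [0, 1] between a reference and a hypothesis
    """
    precision = sum(max(d(r, h) for r in references) for h in hypotheses) / len(hypotheses)
    recall = sum(max(d(r, h) for h in hypotheses) for r in references) / len(references)
    return precision, recall

# Toy similarity: exact dialog-act match (1.0 if the tags agree, else 0.0).
act_match = lambda r, h: 1.0 if r["act"] == h["act"] else 0.0
p, r = response_precision_recall(
    [{"act": "statement"}, {"act": "back-channel"}],
    [{"act": "back-channel"}, {"act": "question"}],
    act_match)
print(p, r)   # 0.5 0.5 for this toy example
```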
GEM-SciDuet-train-100#paper-1263#slide-1
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN u L(θ, φ; x, c) = −KL(q φ (z|x, c) p θ (z|c)) + E q φ (z|c,x) [log p θ (x|z, c)] (1) ≤ log p(x|c) i = [ h i , h i ].", "x is simply u k .", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u 1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h c of the context encoder is concatenated with meta features and c = [h c , m].", "Since we assume z follows isotropic Gaussian distribution, the recognition network q φ (z|x, c) ∼ N (µ, σ 2 I) and the prior network p θ (z|c) ∼ N (µ , σ 2 I), and then we have: µ log(σ 2 ) = W r x c + b r (2) µ log(σ 2 ) = MLP p (c) (3) We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N (z; µ, σ 2 I) predicted by the recognition network (training) or N (z; µ , σ 2 I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s 0 = W i [z, c]+b i .", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y. y relies on z and c as shown in Figure 2 .", "Specifically, during training the initial state of the response decoder is s 0 = W i [z, c, y] + b i and the input at every step is [e t , y] where e t is the word embedding of t th word in x.", "In addition, there is an MLP to predict y = MLP y (z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle decoders.", "We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture.", "KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q φ (z|x, c, y) P θ (z|c)) + E q φ (z|c,x,y) [log p(x|z, c, y)] + E q φ (z|c,x,y) [log p(y|z, c)] (4) Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge .", "Following our one-tomany hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M c reference responses r j , j ∈ [1, M c ].", "Meanwhile a model can generate N hypothesis re- sponses h i , i ∈ [1, N ].", "The generalized responselevel precision/recall for a given dialog context is: precision(c) = N i=1 max j∈[1,Mc] d(r j , h i ) N recall(c) = Mc j=1 max i∈[1,N ] d(r j , h i )) M c where d(r j , h i ) is a distance function which lies between 0 to 1 and measures the similarities between r j and h i .", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015) .", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.", "Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016) .", "The d(r j , h i ) is the cosine distance of the two embedding vectors.", "We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow.", "3.", "Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r j , h i ) = 1 if r j and h i have the same dialog acts, otherwise d(r j , h i ) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts.", "This impacts reliability of our measures.", "Inspired by (Sordoni et al., 2015) , we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references in average per context.", "The average number of distinct reference dialog acts is 4.2.", "Table 1 The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-1
Our Key Insights
Response generation in conversation is a ONE-TO-MANY mapping problem at the discourse level: a similar dialog context can have many different yet valid responses. Learn a probabilistic distribution over the valid responses instead of only keeping the most likely one.
Response generation in conversation is a ONE-TO-MANY mapping problem at the discourse level: a similar dialog context can have many different yet valid responses. Learn a probabilistic distribution over the valid responses instead of only keeping the most likely one.
[]
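Section 5.3 of the content above visualizes the posterior z of test responses with t-SNE and finds clusters that correlate with dialog act and response length. A small sketch of that visualization step, using placeholder data in place of the real posterior means, might look like this:

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Placeholder for the (num_responses, 200) posterior means produced by the recognition network.
z_post = np.random.randn(500, 200)
labels = np.random.randint(0, 5, size=500)   # e.g. dialog-act ids of each response

coords = TSNE(n_components=2, init="random", perplexity=30).fit_transform(z_post)
plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=8, cmap="tab10")
plt.title("Posterior z of test responses (colored by dialog act)")
plt.savefig("latent_tsne.png", dpi=150)
```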
GEM-SciDuet-train-100#paper-1263#slide-2
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN u L(θ, φ; x, c) = −KL(q φ (z|x, c) p θ (z|c)) + E q φ (z|c,x) [log p θ (x|z, c)] (1) ≤ log p(x|c) i = [ h i , h i ].", "x is simply u k .", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u 1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h c of the context encoder is concatenated with meta features and c = [h c , m].", "Since we assume z follows isotropic Gaussian distribution, the recognition network q φ (z|x, c) ∼ N (µ, σ 2 I) and the prior network p θ (z|c) ∼ N (µ , σ 2 I), and then we have: µ log(σ 2 ) = W r x c + b r (2) µ log(σ 2 ) = MLP p (c) (3) We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N (z; µ, σ 2 I) predicted by the recognition network (training) or N (z; µ , σ 2 I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s 0 = W i [z, c]+b i .", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y. y relies on z and c as shown in Figure 2 .", "Specifically, during training the initial state of the response decoder is s 0 = W i [z, c, y] + b i and the input at every step is [e t , y] where e t is the word embedding of t th word in x.", "In addition, there is an MLP to predict y = MLP y (z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle decoders.", "We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture.", "KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q φ (z|x, c, y) P θ (z|c)) + E q φ (z|c,x,y) [log p(x|z, c, y)] + E q φ (z|c,x,y) [log p(y|z, c)] (4) Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge .", "Following our one-tomany hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M c reference responses r j , j ∈ [1, M c ].", "Meanwhile a model can generate N hypothesis re- sponses h i , i ∈ [1, N ].", "The generalized responselevel precision/recall for a given dialog context is: precision(c) = N i=1 max j∈[1,Mc] d(r j , h i ) N recall(c) = Mc j=1 max i∈[1,N ] d(r j , h i )) M c where d(r j , h i ) is a distance function which lies between 0 to 1 and measures the similarities between r j and h i .", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015) .", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.", "Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016) .", "The d(r j , h i ) is the cosine distance of the two embedding vectors.", "We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow.", "3.", "Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r j , h i ) = 1 if r j and h i have the same dialog acts, otherwise d(r j , h i ) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts.", "This impacts reliability of our measures.", "Inspired by (Sordoni et al., 2015) , we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references in average per context.", "The average number of distinct reference dialog acts is 4.2.", "Table 1 The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
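The prior/recognition networks, reparametrization trick and KL term described in Section 3.1 above (Eq. 1–3) can be illustrated with a minimal PyTorch-style sketch. This is not the authors' implementation; all class and argument names (LatentCore, ctx_size, latent_size, hidden=400, ...) are placeholders chosen to mirror the text.

```python
import torch
import torch.nn as nn

class LatentCore(nn.Module):
    """Minimal sketch of the CVAE latent machinery (Eq. 1-3).

    c : encoded dialog context   [batch, ctx_size]
    x : encoded response         [batch, utt_size]
    z : diagonal-Gaussian latent [batch, latent_size]
    """
    def __init__(self, ctx_size, utt_size, latent_size, hidden=400):
        super().__init__()
        # Recognition network q_phi(z|x,c): a single linear layer (Eq. 2).
        self.recog = nn.Linear(ctx_size + utt_size, 2 * latent_size)
        # Prior network p_theta(z|c): MLP with one tanh hidden layer (Eq. 3).
        self.prior = nn.Sequential(
            nn.Linear(ctx_size, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * latent_size))

    @staticmethod
    def reparameterize(mu, logvar):
        # z = mu + sigma * eps, eps ~ N(0, I)  (reparametrization trick)
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    def forward(self, c, x=None):
        p_mu, p_logvar = self.prior(c).chunk(2, dim=-1)
        if x is None:                       # testing: sample from the prior
            return self.reparameterize(p_mu, p_logvar), None
        q_mu, q_logvar = self.recog(torch.cat([x, c], dim=-1)).chunk(2, dim=-1)
        z = self.reparameterize(q_mu, q_logvar)   # training: sample from the posterior
        # Closed-form KL(q || p) between two diagonal Gaussians, per example.
        kl = 0.5 * torch.sum(
            p_logvar - q_logvar
            + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp()
            - 1.0, dim=-1)
        return z, kl
```

The decoder's initial state is then W_i[z, c] + b_i, and training maximizes the reconstruction log-likelihood minus this KL term (Eq. 1), with the KL weight annealed as described in Section 4.2.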
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-2
Our Contributions
Present an E2E dialog model adapted from Conditional Variational Autoencoders (CVAE). Enable integration of expert knowledge via knowledge-guided CVAE. Improve the training method of optimizing CVAE/VAE for text generation.
Present an E2E dialog model adapted from Conditional Variational Autoencoders (CVAE). Enable integration of expert knowledge via knowledge-guided CVAE. Improve the training method of optimizing CVAE/VAE for text generation.
[]
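The second contribution listed above, knowledge-guided CVAE, conditions generation on a linguistic feature y (a dialog act in the paper): y is predicted from [z, c] by an MLP, the decoder's initial state becomes s0 = W_i[z, c, y] + b_i, and Eq. 4 adds a log p(y|z, c) term. A minimal PyTorch-style sketch under those assumptions; names such as KgDecoderHead and num_dialog_acts are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KgDecoderHead(nn.Module):
    """Sketch of the kgCVAE additions: predict the linguistic feature y
    (here a dialog-act label) from [z, c] and condition the decoder on it."""
    def __init__(self, latent_size, ctx_size, num_dialog_acts, dec_hidden):
        super().__init__()
        self.act_mlp = nn.Linear(latent_size + ctx_size, num_dialog_acts)          # y = MLP_y(z, c)
        self.init_state = nn.Linear(latent_size + ctx_size + num_dialog_acts, dec_hidden)

    def forward(self, z, c, y_true=None):
        logits = self.act_mlp(torch.cat([z, c], dim=-1))
        # Training uses the oracle dialog act; testing uses the predicted one.
        if y_true is not None:
            y = F.one_hot(y_true, logits.size(-1)).float()
        else:
            y = F.one_hot(logits.argmax(dim=-1), logits.size(-1)).float()
        s0 = self.init_state(torch.cat([z, c, y], dim=-1))   # s_0 = W_i [z, c, y] + b_i
        # Extra term in Eq. 4: log p(y|z, c), i.e. a cross-entropy loss on the dialog act.
        act_loss = F.cross_entropy(logits, y_true) if y_true is not None else None
        return s0, y, act_loss
```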
GEM-SciDuet-train-100#paper-1263#slide-3
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN u L(θ, φ; x, c) = −KL(q φ (z|x, c) p θ (z|c)) + E q φ (z|c,x) [log p θ (x|z, c)] (1) ≤ log p(x|c) i = [ h i , h i ].", "x is simply u k .", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u 1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h c of the context encoder is concatenated with meta features and c = [h c , m].", "Since we assume z follows isotropic Gaussian distribution, the recognition network q φ (z|x, c) ∼ N (µ, σ 2 I) and the prior network p θ (z|c) ∼ N (µ , σ 2 I), and then we have: µ log(σ 2 ) = W r x c + b r (2) µ log(σ 2 ) = MLP p (c) (3) We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N (z; µ, σ 2 I) predicted by the recognition network (training) or N (z; µ , σ 2 I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s 0 = W i [z, c]+b i .", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y. y relies on z and c as shown in Figure 2 .", "Specifically, during training the initial state of the response decoder is s 0 = W i [z, c, y] + b i and the input at every step is [e t , y] where e t is the word embedding of t th word in x.", "In addition, there is an MLP to predict y = MLP y (z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle decoders.", "We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture.", "KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q φ (z|x, c, y) P θ (z|c)) + E q φ (z|c,x,y) [log p(x|z, c, y)] + E q φ (z|c,x,y) [log p(y|z, c)] (4) Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge .", "Following our one-tomany hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M c reference responses r j , j ∈ [1, M c ].", "Meanwhile a model can generate N hypothesis re- sponses h i , i ∈ [1, N ].", "The generalized responselevel precision/recall for a given dialog context is: precision(c) = N i=1 max j∈[1,Mc] d(r j , h i ) N recall(c) = Mc j=1 max i∈[1,N ] d(r j , h i )) M c where d(r j , h i ) is a distance function which lies between 0 to 1 and measures the similarities between r j and h i .", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015) .", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.", "Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016) .", "The d(r j , h i ) is the cosine distance of the two embedding vectors.", "We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow.", "3.", "Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r j , h i ) = 1 if r j and h i have the same dialog acts, otherwise d(r j , h i ) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts.", "This impacts reliability of our measures.", "Inspired by (Sordoni et al., 2015) , we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references in average per context.", "The average number of distinct reference dialog acts is 4.2.", "Table 1 The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
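Sections 3.3 and 5.4 above motivate the bag-of-word auxiliary loss (Eq. 5–6) and linear KL annealing over the first 10,000 batches. The following is a hedged PyTorch-style sketch of both pieces; bow_logits = MLP_b(z, c) and pad_id are assumed names, not identifiers from the paper.

```python
import torch
import torch.nn.functional as F

def bow_loss(bow_logits, target_ids, pad_id=0):
    """Bag-of-word auxiliary loss (Eq. 5): the latent code must predict every
    word of the response without order.
    bow_logits : MLP_b(z, c), shape [batch, V]
    target_ids : word indices of the response, shape [batch, max_len]
    """
    log_probs = F.log_softmax(bow_logits, dim=-1)          # [batch, V]
    token_ll = torch.gather(log_probs, 1, target_ids)      # log p(x_t | z, c) per position
    mask = (target_ids != pad_id).float()                  # ignore padding positions
    return -(token_ll * mask).sum(dim=1).mean()            # negative log-likelihood

def kl_weight(batch_idx, anneal_batches=10000):
    """Linear KL annealing: the weight grows from 0 to 1 over the first anneal_batches."""
    return min(1.0, batch_idx / anneal_batches)

# Per-batch training loss (sketch):
# loss = reconstruction_nll + kl_weight(step) * kl + bow_loss(bow_logits, response_ids)
```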
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-3
Conditional Variational Autoencoder (CVAE)
C is the dialog context (B: Do you like cats? A: Yes I do). Z is the latent variable (Gaussian). X is the next response (B: So do I). Trained by Stochastic Gradient Variational Bayes (SGVB) [Kingma and Welling 2013]
C is the dialog context (B: Do you like cats? A: Yes I do). Z is the latent variable (Gaussian). X is the next response (B: So do I). Trained by Stochastic Gradient Variational Bayes (SGVB) [Kingma and Welling 2013]
[]
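The slide above summarizes the CVAE generative process: sample z from the prior given the context c, then decode the response x. At test time the paper draws N samples of z and decodes each one greedily, so all diversity comes from the latent variable. A sketch with hypothetical prior_net and greedy_decode helpers:

```python
import torch

def sample_diverse_responses(c, prior_net, greedy_decode, n_samples=5):
    """Draw N latent samples from the prior p(z|c) and greedy-decode each one.

    prior_net(c)        : assumed to return (mu, logvar) of the prior Gaussian
    greedy_decode(z, c) : hypothetical helper that runs the GRU decoder
                          with argmax at every step
    """
    mu, logvar = prior_net(c)
    responses = []
    for _ in range(n_samples):
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        responses.append(greedy_decode(z, c))
    return responses
```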
GEM-SciDuet-train-100#paper-1263#slide-5
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN: u_i = [→h_i, ←h_i].", "The variational lower bound referred to in the previous paragraph is L(θ, φ; x, c) = −KL(q_φ(z|x, c) || p_θ(z|c)) + E_{q_φ(z|c,x)}[log p_θ(x|z, c)] ≤ log p(x|c) (1).", "x is simply u_k.", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u_1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h_c of the context encoder is concatenated with meta features and c = [h_c, m].", "Since we assume z follows isotropic Gaussian distribution, the recognition network q_φ(z|x, c) ∼ N(µ, σ²I) and the prior network p_θ(z|c) ∼ N(µ′, σ′²I), and then we have: [µ, log(σ²)] = W_r [x, c] + b_r (2) and [µ′, log(σ′²)] = MLP_p(c) (3). We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N(z; µ, σ²I) predicted by the recognition network (training) or N(z; µ′, σ′²I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s_0 = W_i [z, c] + b_i.", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y. y relies on z and c as shown in Figure 2.", "Specifically, during training the initial state of the response decoder is s_0 = W_i [z, c, y] + b_i and the input at every step is [e_t, y] where e_t is the word embedding of the t-th word in x.", "In addition, there is an MLP to predict y = MLP_y(z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle decoders.", "We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture.", "KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q_φ(z|x, c, y) || p_θ(z|c)) + E_{q_φ(z|c,x,y)}[log p(x|z, c, y)] + E_{q_φ(z|c,x,y)}[log p(y|z, c)] (4). Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge .", "Following our one-tomany hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M c reference responses r j , j ∈ [1, M c ].", "Meanwhile a model can generate N hypothesis re- sponses h i , i ∈ [1, N ].", "The generalized responselevel precision/recall for a given dialog context is: precision(c) = N i=1 max j∈[1,Mc] d(r j , h i ) N recall(c) = Mc j=1 max i∈[1,N ] d(r j , h i )) M c where d(r j , h i ) is a distance function which lies between 0 to 1 and measures the similarities between r j and h i .", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015) .", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.", "Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016) .", "The d(r j , h i ) is the cosine distance of the two embedding vectors.", "We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow.", "3.", "Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r j , h i ) = 1 if r j and h i have the same dialog acts, otherwise d(r j , h i ) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts.", "This impacts reliability of our measures.", "Inspired by (Sordoni et al., 2015) , we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references in average per context.", "The average number of distinct reference dialog acts is 4.2.", "Table 1 The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
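To make the recognition/prior construction of Section 3.1 concrete, the sketch below implements Equations 2-3 — the recognition network computes [µ, log σ²] from [x; c], the prior network computes [µ′, log σ′²] from c via a one-hidden-layer tanh MLP — together with the reparametrization trick. Layer sizes follow Section 4.2; the class name and exact wiring are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class LatentGaussian(nn.Module):
    """Recognition net q(z|x,c) and prior net p(z|c) for a 200-dim latent z (sketch)."""
    def __init__(self, x_dim=600, c_dim=600, z_dim=200):
        super().__init__()
        # Eq. (2): [mu, log sigma^2] = W_r [x; c] + b_r   (recognition network)
        self.recog = nn.Linear(x_dim + c_dim, 2 * z_dim)
        # Eq. (3): [mu', log sigma'^2] = MLP_p(c)          (prior network, 400 tanh units)
        self.prior = nn.Sequential(nn.Linear(c_dim, 400), nn.Tanh(),
                                   nn.Linear(400, 2 * z_dim))

    def forward(self, x, c, use_recognition=True):
        # training samples from q(z|x,c); testing samples from p(z|c)
        stats = self.recog(torch.cat([x, c], dim=-1)) if use_recognition else self.prior(c)
        mu, log_var = stats.chunk(2, dim=-1)
        eps = torch.randn_like(mu)                 # reparametrization trick
        z = mu + torch.exp(0.5 * log_var) * eps
        return z, mu, log_var
```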
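The bag-of-word loss of Section 3.3 lost its operators in extraction; Equation 5 should read log p(x_bow|z, c) = Σ_{t=1}^{|x|} log( e^{f_{x_t}} / Σ_j e^{f_j} ), i.e. a sum of log-softmax scores of the response words. Below is a hedged single-example sketch; the MLP interface and its inputs (z and c, as suggested by Figure 3(b)) are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def bow_loss(z, c, bow_mlp, target_word_ids):
    """Auxiliary bag-of-words loss (Eq. 5 sketch, one example): the latent z must
    predict which words appear in the response, ignoring their order.

    bow_mlp:         assumed MLP producing a vocabulary-sized logit vector f
    target_word_ids: LongTensor of word indices appearing in the reference response x
    """
    f = bow_mlp(torch.cat([z, c], dim=-1))        # f in R^V (V = vocabulary size)
    log_probs = F.log_softmax(f, dim=-1)          # log softmax over the vocabulary
    # log p(x_bow | z, c) = sum_t log softmax(f)[x_t]; negated for minimization
    return -log_probs[target_word_ids].sum()
```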
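Putting the pieces together, a single optimization step following the recipe of Section 4.2 (Adam with learning rate 0.001, mini-batches of 30, gradient clipping at 5, KL annealing plus the BOW term of Equation 6) might look like the sketch below; a `model` that returns the three loss terms is an assumed interface.

```python
import torch

def training_step(model, batch, optimizer, kl_weight):
    """One update of the annealed variational bound (sketch; `model` is assumed to
    return reconstruction, KL, and bag-of-words loss terms for the batch)."""
    rec_loss, kl_loss, bow = model(batch)
    loss = rec_loss + kl_weight * kl_loss + bow           # Eq. 6 with annealed KL weight
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0)   # gradient clipping at 5
    optimizer.step()
    return loss.item()

# optimizer = torch.optim.Adam(model.parameters(), lr=0.001)   # settings from Section 4.2
```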
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-5
Training of kgCVAE
[kgCVAE training architecture diagram: the target response "I like cats </s>" feeds the Utterance Encoder; the Context Encoder reads the preceding utterances together with the Conversation Floor; the Response Decoder, conditioned on the latent variable and the dialog act y, generates "I like cats" and is also trained to predict the bag-of-words x_bow.]
[kgCVAE training architecture diagram: the target response "I like cats </s>" feeds the Utterance Encoder; the Context Encoder reads the preceding utterances together with the Conversation Floor; the Response Decoder, conditioned on the latent variable and the dialog act y, generates "I like cats" and is also trained to predict the bag-of-words x_bow.]
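The diagram above corresponds to the kgCVAE training path of Section 3.2: the response decoder is seeded with s_0 = W_i[z, c, y] + b_i, an MLP predicts the dialog act y from (z, c), and the predicted y replaces the oracle label at test time. A minimal sketch with assumed shapes (42 dialog acts, 200-dim z, 400-dim decoder state; not the authors' code):

```python
import torch
import torch.nn as nn

class KgCVAEDecoderInit(nn.Module):
    """Condition the response decoder on [z, c, y] as in kgCVAE (sketch)."""
    def __init__(self, z_dim=200, c_dim=600, y_dim=42, hidden=400):
        super().__init__()
        # s_0 = W_i [z, c, y] + b_i  (c would also carry meta features in the paper)
        self.init_proj = nn.Linear(z_dim + c_dim + y_dim, hidden)
        # MLP_y(z, c): predicts the dialog act label y
        self.y_mlp = nn.Sequential(nn.Linear(z_dim + c_dim, 400), nn.Tanh(),
                                   nn.Linear(400, y_dim))

    def forward(self, z, c, y_onehot=None):
        y_logits = self.y_mlp(torch.cat([z, c], dim=-1))
        if y_onehot is None:
            # testing: the (soft) predicted dialog act stands in for the oracle one-hot
            y_onehot = torch.softmax(y_logits, dim=-1)
        s0 = self.init_proj(torch.cat([z, c, y_onehot], dim=-1))
        return s0, y_logits            # s0 seeds the GRU response decoder
```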
[]
GEM-SciDuet-train-100#paper-1263#slide-7
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN u L(θ, φ; x, c) = −KL(q φ (z|x, c) p θ (z|c)) + E q φ (z|c,x) [log p θ (x|z, c)] (1) ≤ log p(x|c) i = [ h i , h i ].", "x is simply u k .", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u 1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h c of the context encoder is concatenated with meta features and c = [h c , m].", "Since we assume z follows isotropic Gaussian distribution, the recognition network q φ (z|x, c) ∼ N (µ, σ 2 I) and the prior network p θ (z|c) ∼ N (µ , σ 2 I), and then we have: µ log(σ 2 ) = W r x c + b r (2) µ log(σ 2 ) = MLP p (c) (3) We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N (z; µ, σ 2 I) predicted by the recognition network (training) or N (z; µ , σ 2 I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s 0 = W i [z, c]+b i .", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y. y relies on z and c as shown in Figure 2 .", "Specifically, during training the initial state of the response decoder is s 0 = W i [z, c, y] + b i and the input at every step is [e t , y] where e t is the word embedding of t th word in x.", "In addition, there is an MLP to predict y = MLP y (z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle decoders.", "We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture.", "KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q φ (z|x, c, y) P θ (z|c)) + E q φ (z|c,x,y) [log p(x|z, c, y)] + E q φ (z|c,x,y) [log p(y|z, c)] (4) Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge .", "Following our one-tomany hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M c reference responses r j , j ∈ [1, M c ].", "Meanwhile a model can generate N hypothesis re- sponses h i , i ∈ [1, N ].", "The generalized responselevel precision/recall for a given dialog context is: precision(c) = N i=1 max j∈[1,Mc] d(r j , h i ) N recall(c) = Mc j=1 max i∈[1,N ] d(r j , h i )) M c where d(r j , h i ) is a distance function which lies between 0 to 1 and measures the similarities between r j and h i .", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015) .", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.", "Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016) .", "The d(r j , h i ) is the cosine distance of the two embedding vectors.", "We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow.", "3.", "Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r j , h i ) = 1 if r j and h i have the same dialog acts, otherwise d(r j , h i ) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts.", "This impacts reliability of our measures.", "Inspired by (Sordoni et al., 2015) , we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references in average per context.", "The average number of distinct reference dialog acts is 4.2.", "Table 1 The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
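The generalized precision/recall of Section 5.2 lost their summation signs in extraction; they should read precision(c) = (1/N) Σ_{i=1}^{N} max_j d(r_j, h_i) and recall(c) = (1/M_c) Σ_{j=1}^{M_c} max_i d(r_j, h_i). A small self-contained sketch follows; the distance function d is passed in (e.g. smoothed sentence BLEU, bag-of-word embedding cosine, or dialog-act match), and the toy similarity in the comment is only a stand-in.

```python
def response_precision_recall(references, hypotheses, d):
    """Generalized response-level precision/recall for one dialog context (sketch).

    references: list of M_c reference responses r_j
    hypotheses: list of N generated responses h_i
    d:          similarity function in [0, 1] between two responses
    """
    precision = sum(max(d(r, h) for r in references) for h in hypotheses) / len(hypotheses)
    recall = sum(max(d(r, h) for h in hypotheses) for r in references) / len(references)
    return precision, recall

# Toy usage with a word-overlap Jaccard similarity standing in for BLEU/A-bow:
# d = lambda a, b: len(set(a.split()) & set(b.split())) / max(1, len(set(a.split()) | set(b.split())))
# p, r = response_precision_recall(["uh-huh", "that sounds fun"], ["uh-huh", "right"], d)
```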
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-7
Optimization Challenge
Training CVAE with an RNN decoder is hard due to the vanishing latent variable problem: the RNN decoder can cheat by using LM information and ignore z! Bowman et al. [2015] described two methods to alleviate the problem: KL annealing (KLA): gradually increase the weight of the KL term from 0 to 1 (needs early stopping). Word drop decoding: setting a proportion of target words to 0 (needs careful parameter tuning).
Training CVAE with an RNN decoder is hard due to the vanishing latent variable problem: the RNN decoder can cheat by using LM information and ignore z! Bowman et al. [2015] described two methods to alleviate the problem: KL annealing (KLA): gradually increase the weight of the KL term from 0 to 1 (needs early stopping). Word drop decoding: setting a proportion of target words to 0 (needs careful parameter tuning).
[]
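For the word-drop heuristic mentioned on this slide, a rough sketch is given below: a fixed proportion of the target word embeddings fed to the decoder is zeroed out during training so the decoder cannot rely only on language-model history. The uniform Bernoulli masking over time steps is an assumption for illustration, not the authors' exact implementation.

```python
import torch

def word_drop(target_embeddings, drop_rate=0.25):
    """Zero out a random proportion of target word embeddings (batch, time, dim).

    Forces the decoder to lean on the latent variable z; too high a drop_rate
    hurts performance, which is why the paper prefers the bag-of-word loss.
    """
    keep = (torch.rand(target_embeddings.size(0),
                       target_embeddings.size(1), 1) >= drop_rate).float()
    return target_embeddings * keep
```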
GEM-SciDuet-train-100#paper-1263#slide-8
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
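The generation procedure the abstract describes (sample a latent intent, then decode greedily) can be sketched as below. The module names `prior_net` and `decoder.greedy_decode` are placeholders for whatever networks implement p(z|c) and p(x|z,c); this is an illustration of the sampling loop, not the authors' code.

```python
import torch

def generate_responses(context_vec, prior_net, decoder, n_samples=5):
    """Draw n_samples diverse responses for one dialog context (sketch only)."""
    responses = []
    for _ in range(n_samples):
        mu, logvar = prior_net(context_vec)                     # parameters of p(z|c)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()    # reparameterized sample
        responses.append(decoder.greedy_decode(z, context_vec)) # greedy word-by-word decoding
    return responses
```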
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN u L(θ, φ; x, c) = −KL(q φ (z|x, c) p θ (z|c)) + E q φ (z|c,x) [log p θ (x|z, c)] (1) ≤ log p(x|c) i = [ h i , h i ].", "x is simply u k .", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u 1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h c of the context encoder is concatenated with meta features and c = [h c , m].", "Since we assume z follows isotropic Gaussian distribution, the recognition network q φ (z|x, c) ∼ N (µ, σ 2 I) and the prior network p θ (z|c) ∼ N (µ , σ 2 I), and then we have: µ log(σ 2 ) = W r x c + b r (2) µ log(σ 2 ) = MLP p (c) (3) We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N (z; µ, σ 2 I) predicted by the recognition network (training) or N (z; µ , σ 2 I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s 0 = W i [z, c]+b i .", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y. y relies on z and c as shown in Figure 2 .", "Specifically, during training the initial state of the response decoder is s 0 = W i [z, c, y] + b i and the input at every step is [e t , y] where e t is the word embedding of t th word in x.", "In addition, there is an MLP to predict y = MLP y (z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle decoders.", "We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture.", "KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q φ (z|x, c, y) P θ (z|c)) + E q φ (z|c,x,y) [log p(x|z, c, y)] + E q φ (z|c,x,y) [log p(y|z, c)] (4) Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge .", "Following our one-tomany hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M c reference responses r j , j ∈ [1, M c ].", "Meanwhile a model can generate N hypothesis re- sponses h i , i ∈ [1, N ].", "The generalized responselevel precision/recall for a given dialog context is: precision(c) = N i=1 max j∈[1,Mc] d(r j , h i ) N recall(c) = Mc j=1 max i∈[1,N ] d(r j , h i )) M c where d(r j , h i ) is a distance function which lies between 0 to 1 and measures the similarities between r j and h i .", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015) .", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.", "Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016) .", "The d(r j , h i ) is the cosine distance of the two embedding vectors.", "We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow.", "3.", "Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r j , h i ) = 1 if r j and h i have the same dialog acts, otherwise d(r j , h i ) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts.", "This impacts reliability of our measures.", "Inspired by (Sordoni et al., 2015) , we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references in average per context.", "The average number of distinct reference dialog acts is 4.2.", "Table 1 The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
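The variational lower bound in Equation 1 uses a KL divergence between two diagonal Gaussians (the recognition network q(z|x,c) and the prior network p(z|c)) plus a reconstruction term, with the reparameterization trick used for sampling. A compact sketch follows; tensor names and the way the final loss is assembled are assumptions for illustration.

```python
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) ), summed over dims."""
    kl = 0.5 * (logvar_p - logvar_q
                + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                - 1.0)
    return kl.sum(dim=-1)

def sample_z(mu, logvar):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

# Negative ELBO (per batch) = decoder reconstruction NLL + KL term:
# loss = rec_nll + gaussian_kl(mu_q, logvar_q, mu_p, logvar_p).mean()
```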
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-8
BOW Loss
Predict the bag-of-words in the response x at once (word counts in the response). Break the dependency between words and eliminate the chance of cheating based on the LM. [Slide figure: z and c feed both the RNN decoder loss over the ordered response and a feed-forward bag-of-word loss.]
Predict the bag-of-words in the response x at once (word counts in the response). Break the dependency between words and eliminate the chance of cheating based on the LM. [Slide figure: z and c feed both the RNN decoder loss over the ordered response and a feed-forward bag-of-word loss.]
[]
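The bag-of-word loss on this slide (Equation 5 in the paper) can be sketched as an MLP that maps the latent variable and context to vocabulary logits once, with the loss being the summed log-probability of every word in the target response, ignoring word order. Layer sizes and the padding convention below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BowLoss(nn.Module):
    def __init__(self, latent_dim, context_dim, vocab_size, hidden=400):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(latent_dim + context_dim, hidden),
                                 nn.Tanh(),
                                 nn.Linear(hidden, vocab_size))

    def forward(self, z, c, target_ids, pad_id=0):
        """target_ids: (batch, max_len) word indices of the response."""
        logits = self.mlp(torch.cat([z, c], dim=-1))   # (batch, vocab), predicted once
        log_probs = F.log_softmax(logits, dim=-1)
        token_lp = log_probs.gather(1, target_ids)     # log-prob of each target word
        mask = (target_ids != pad_id).float()          # ignore padding positions
        return -(token_lp * mask).sum(dim=1).mean()    # negative log-likelihood
```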
GEM-SciDuet-train-100#paper-1263#slide-9
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
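The "linguistic prior knowledge" mentioned in the abstract is realized in kgCVAE by predicting a discourse feature y (a dialog act) from z and c and feeding it to the decoder. A minimal sketch of that prediction head is below; class names and hidden sizes are illustrative, with 42 dialog act types taken from the paper.

```python
import torch
import torch.nn as nn

class DialogActHead(nn.Module):
    """Predict the discourse feature y (here a dialog act) from z and c."""
    def __init__(self, latent_dim, context_dim, n_acts=42, hidden=400):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(latent_dim + context_dim, hidden),
                                 nn.Tanh(),
                                 nn.Linear(hidden, n_acts))

    def forward(self, z, c):
        return self.mlp(torch.cat([z, c], dim=-1))  # logits over dialog acts

# At test time, the argmax of these logits replaces the oracle label when
# forming the decoder's initial state s0 = W_i [z, c, y] + b_i.
```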
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN u L(θ, φ; x, c) = −KL(q φ (z|x, c) p θ (z|c)) + E q φ (z|c,x) [log p θ (x|z, c)] (1) ≤ log p(x|c) i = [ h i , h i ].", "x is simply u k .", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u 1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h c of the context encoder is concatenated with meta features and c = [h c , m].", "Since we assume z follows isotropic Gaussian distribution, the recognition network q φ (z|x, c) ∼ N (µ, σ 2 I) and the prior network p θ (z|c) ∼ N (µ , σ 2 I), and then we have: µ log(σ 2 ) = W r x c + b r (2) µ log(σ 2 ) = MLP p (c) (3) We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N (z; µ, σ 2 I) predicted by the recognition network (training) or N (z; µ , σ 2 I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s 0 = W i [z, c]+b i .", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y. y relies on z and c as shown in Figure 2 .", "Specifically, during training the initial state of the response decoder is s 0 = W i [z, c, y] + b i and the input at every step is [e t , y] where e t is the word embedding of t th word in x.", "In addition, there is an MLP to predict y = MLP y (z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle decoders.", "We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture.", "KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q φ (z|x, c, y) P θ (z|c)) + E q φ (z|c,x,y) [log p(x|z, c, y)] + E q φ (z|c,x,y) [log p(y|z, c)] (4) Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge .", "Following our one-tomany hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M c reference responses r j , j ∈ [1, M c ].", "Meanwhile a model can generate N hypothesis re- sponses h i , i ∈ [1, N ].", "The generalized responselevel precision/recall for a given dialog context is: precision(c) = N i=1 max j∈[1,Mc] d(r j , h i ) N recall(c) = Mc j=1 max i∈[1,N ] d(r j , h i )) M c where d(r j , h i ) is a distance function which lies between 0 to 1 and measures the similarities between r j and h i .", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015) .", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.", "Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016) .", "The d(r j , h i ) is the cosine distance of the two embedding vectors.", "We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow.", "3.", "Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r j , h i ) = 1 if r j and h i have the same dialog acts, otherwise d(r j , h i ) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts.", "This impacts reliability of our measures.", "Inspired by (Sordoni et al., 2015) , we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references in average per context.", "The average number of distinct reference dialog acts is 4.2.", "Table 1 The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (high entropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher-entropy contexts.", "It also shows that CVAE suffers from lower precision, especially in low-entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE across the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits dialog acts similar to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e., \"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel, and the range of valid answers is more constrained than in example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e., modeling lexical diversity in low-entropy dialog contexts and discourse-level diversity in high-entropy ones.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimensional data, so we conjecture that the posterior z output by the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008).", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of the bag-of-word (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015), we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is the same except that we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well-trained model should lead to a low reconstruction loss and a small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "Table 2: Generated responses from the baseline and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to the space limit (the actual context window size is 10).", "The standard VAE fails to learn a meaningful latent variable by having a KL cost close to 0 and a reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014).", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "Lastly, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in the latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batches), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of the BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used the BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses with respect to dialog acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representations of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier for humans to explain and control.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other linguistic phenomena, including sentiment, named entities, etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a data-driven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
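As a concrete illustration of the generalized response-level precision/recall defined in the record above, here is a minimal sketch in plain Python. The function names and the toy Jaccard distance are hypothetical stand-ins rather than the paper's released code; any d(r, h) in [0, 1] can be plugged in.

```python
from typing import Callable, Sequence, Tuple

def response_precision_recall(
    references: Sequence[str],
    hypotheses: Sequence[str],
    d: Callable[[str, str], float],
) -> Tuple[float, float]:
    """Generalized response-level precision/recall for one dialog context.

    precision(c): mean over hypotheses of each hypothesis' best-matching reference score.
    recall(c):    mean over references of each reference's best-matching hypothesis score.
    d(r, h) is assumed to lie in [0, 1], with higher meaning more similar.
    """
    precision = sum(max(d(r, h) for r in references) for h in hypotheses) / len(hypotheses)
    recall = sum(max(d(r, h) for h in hypotheses) for r in references) / len(references)
    return precision, recall

def jaccard(r: str, h: str) -> float:
    """Toy distance function (unigram Jaccard overlap), only to make the sketch runnable."""
    a, b = set(r.lower().split()), set(h.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

if __name__ == "__main__":
    refs = ["uh-huh", "that sounds really interesting", "what kind of books do you read"]
    hyps = ["uh-huh", "i see", "what books do you read"]
    p, r = response_precision_recall(refs, hyps, jaccard)
    print(f"precision={p:.3f} recall={r:.3f}")
```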
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-9
Dataset
Data Name: Switchboard Release 2 | Number of context-response pairs: - | Vocabulary Size: Top 10K | Dialog Act Labels: 42 types, tagged by SVM and human | Number of Topics: 70, tagged by humans
Data Name: Switchboard Release 2 | Number of context-response pairs: - | Vocabulary Size: Top 10K | Dialog Act Labels: 42 types, tagged by SVM and human | Number of Topics: 70, tagged by humans
[]
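For the embedding-based similarities (A-bow and E-bow) used as d(r, h) in the record above, a minimal NumPy sketch follows. It assumes a pre-loaded word-to-vector dictionary (e.g. the GloVe Twitter vectors mentioned in the paper); the variable name `glove` and the helper names are placeholders.

```python
import numpy as np

def bow_embedding(utterance, glove, dim=200, mode="avg"):
    """Sentence vector from word embeddings: mean pooling (A-bow) or extrema pooling (E-bow)."""
    vecs = [glove[w] for w in utterance.lower().split() if w in glove]
    if not vecs:
        return np.zeros(dim)
    mat = np.stack(vecs)                                  # (num_words, dim)
    if mode == "avg":
        return mat.mean(axis=0)
    # Extrema pooling: per dimension, keep the entry with the largest magnitude.
    rows = np.abs(mat).argmax(axis=0)
    return mat[rows, np.arange(mat.shape[1])]

def cosine_sim(a, b):
    """Cosine similarity; the record treats this as the similarity score d(r, h)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Usage sketch (glove: dict mapping word -> np.ndarray of length dim):
# d_abow = cosine_sim(bow_embedding(r, glove, mode="avg"),
#                     bow_embedding(h, glove, mode="avg"))
# d_ebow = cosine_sim(bow_embedding(r, glove, mode="extrema"),
#                     bow_embedding(h, glove, mode="extrema"))
```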
GEM-SciDuet-train-100#paper-1263#slide-10
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN u L(θ, φ; x, c) = −KL(q φ (z|x, c) p θ (z|c)) + E q φ (z|c,x) [log p θ (x|z, c)] (1) ≤ log p(x|c) i = [ h i , h i ].", "x is simply u k .", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u 1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h c of the context encoder is concatenated with meta features and c = [h c , m].", "Since we assume z follows isotropic Gaussian distribution, the recognition network q φ (z|x, c) ∼ N (µ, σ 2 I) and the prior network p θ (z|c) ∼ N (µ , σ 2 I), and then we have: µ log(σ 2 ) = W r x c + b r (2) µ log(σ 2 ) = MLP p (c) (3) We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N (z; µ, σ 2 I) predicted by the recognition network (training) or N (z; µ , σ 2 I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s 0 = W i [z, c]+b i .", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y. y relies on z and c as shown in Figure 2 .", "Specifically, during training the initial state of the response decoder is s 0 = W i [z, c, y] + b i and the input at every step is [e t , y] where e t is the word embedding of t th word in x.", "In addition, there is an MLP to predict y = MLP y (z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle decoders.", "We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture.", "KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q φ (z|x, c, y) P θ (z|c)) + E q φ (z|c,x,y) [log p(x|z, c, y)] + E q φ (z|c,x,y) [log p(y|z, c)] (4) Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge .", "Following our one-tomany hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M c reference responses r j , j ∈ [1, M c ].", "Meanwhile a model can generate N hypothesis re- sponses h i , i ∈ [1, N ].", "The generalized responselevel precision/recall for a given dialog context is: precision(c) = N i=1 max j∈[1,Mc] d(r j , h i ) N recall(c) = Mc j=1 max i∈[1,N ] d(r j , h i )) M c where d(r j , h i ) is a distance function which lies between 0 to 1 and measures the similarities between r j and h i .", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015) .", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.", "Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016) .", "The d(r j , h i ) is the cosine distance of the two embedding vectors.", "We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow.", "3.", "Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r j , h i ) = 1 if r j and h i have the same dialog acts, otherwise d(r j , h i ) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts.", "This impacts reliability of our measures.", "Inspired by (Sordoni et al., 2015) , we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references in average per context.", "The average number of distinct reference dialog acts is 4.2.", "Table 1 The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
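The record above describes the CVAE's recognition and prior networks as diagonal Gaussians trained with the SGVB bound. Below is a hedged PyTorch sketch of the two pieces that bound needs: the reparameterization trick and the closed-form KL between the two Gaussians. The module names, shapes, and the usage comments are illustrative assumptions, not the authors' code.

```python
import torch

def reparameterize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Draw z ~ N(mu, diag(exp(logvar))) in a way that keeps gradients flowing."""
    std = torch.exp(0.5 * logvar)
    return mu + torch.randn_like(std) * std

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over latent dims."""
    var_q, var_p = torch.exp(logvar_q), torch.exp(logvar_p)
    per_dim = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return per_dim.sum(dim=-1)

# Usage sketch: the recognition network sees (x, c); the prior network sees only c.
# mu_q, logvar_q = recognition_net(torch.cat([x_enc, c_enc], dim=-1))
# mu_p, logvar_p = prior_net(c_enc)
# z = reparameterize(mu_q, logvar_q)                        # training-time sample
# kl_term = gaussian_kl(mu_q, logvar_q, mu_p, logvar_p).mean()
# elbo = reconstruction_log_prob - kl_term                  # quantity to maximize
```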
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-10
Quantitative Metrics
Reference responses r_1 ... r_Mc matched against hypothesis responses h_1 ... h_N; d(r, h) is a distance function in [0, 1] that measures the similarity between a reference and a hypothesis.
Reference responses r_1 ... r_Mc matched against hypothesis responses h_1 ... h_N; d(r, h) is a distance function in [0, 1] that measures the similarity between a reference and a hypothesis.
[]
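At test time, the record states that CVAE/kgCVAE draw N samples of z from the prior network and decode each one greedily, so all diversity comes from the latent variable. The following schematic sketch assumes placeholder callables standing in for the paper's GRU-based prior network and decoder; it is not the authors' implementation.

```python
import torch

@torch.no_grad()
def sample_responses(prior_net, init_state_fn, decoder_step, c_enc,
                     n_samples: int = 5, max_len: int = 20,
                     bos_id: int = 1, eos_id: int = 2):
    """Draw n_samples latent codes from the prior p(z|c) and greedily decode each one.

    prior_net(c_enc)         -> (mu, logvar) of the prior Gaussian
    init_state_fn(z, c_enc)  -> initial decoder hidden state
    decoder_step(tok, state) -> (logits over the vocabulary, next state)
    All three are placeholders for the paper's networks.
    """
    mu, logvar = prior_net(c_enc)
    responses = []
    for _ in range(n_samples):
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # stochasticity lives here
        state = init_state_fn(z, c_enc)
        tok, out = bos_id, []
        for _ in range(max_len):
            logits, state = decoder_step(tok, state)
            tok = int(logits.argmax(dim=-1))                      # greedy: no beam, no sampling
            if tok == eos_id:
                break
            out.append(tok)
        responses.append(out)
    return responses
```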
GEM-SciDuet-train-100#paper-1263#slide-11
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN u L(θ, φ; x, c) = −KL(q φ (z|x, c) p θ (z|c)) + E q φ (z|c,x) [log p θ (x|z, c)] (1) ≤ log p(x|c) i = [ h i , h i ].", "x is simply u k .", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u 1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h c of the context encoder is concatenated with meta features and c = [h c , m].", "Since we assume z follows isotropic Gaussian distribution, the recognition network q φ (z|x, c) ∼ N (µ, σ 2 I) and the prior network p θ (z|c) ∼ N (µ , σ 2 I), and then we have: µ log(σ 2 ) = W r x c + b r (2) µ log(σ 2 ) = MLP p (c) (3) We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N (z; µ, σ 2 I) predicted by the recognition network (training) or N (z; µ , σ 2 I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s 0 = W i [z, c]+b i .", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y. y relies on z and c as shown in Figure 2 .", "Specifically, during training the initial state of the response decoder is s 0 = W i [z, c, y] + b i and the input at every step is [e t , y] where e t is the word embedding of t th word in x.", "In addition, there is an MLP to predict y = MLP y (z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle decoders.", "We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture.", "KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q φ (z|x, c, y) P θ (z|c)) + E q φ (z|c,x,y) [log p(x|z, c, y)] + E q φ (z|c,x,y) [log p(y|z, c)] (4) Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge .", "Following our one-tomany hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M c reference responses r j , j ∈ [1, M c ].", "Meanwhile a model can generate N hypothesis re- sponses h i , i ∈ [1, N ].", "The generalized responselevel precision/recall for a given dialog context is: precision(c) = N i=1 max j∈[1,Mc] d(r j , h i ) N recall(c) = Mc j=1 max i∈[1,N ] d(r j , h i )) M c where d(r j , h i ) is a distance function which lies between 0 to 1 and measures the similarities between r j and h i .", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015) .", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.", "Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016) .", "The d(r j , h i ) is the cosine distance of the two embedding vectors.", "We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow.", "3.", "Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r j , h i ) = 1 if r j and h i have the same dialog acts, otherwise d(r j , h i ) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts.", "This impacts reliability of our measures.", "Inspired by (Sordoni et al., 2015) , we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references in average per context.", "The average number of distinct reference dialog acts is 4.2.", "Table 1 The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
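For readers who want to reproduce the KL-annealing schedule mentioned in the text above (the KL weight rises linearly from 0 to 1 over the first 5,000 batches in the Penn Treebank runs), here is a minimal Python sketch; the function name and the clamping at 1.0 afterwards are illustrative assumptions, not the authors' code:

```python
def kl_weight(step, warmup_steps=5000):
    # Linear KL annealing: 0 at the first batch, 1 after `warmup_steps`
    # batches, then held at 1 for the rest of training (assumed behaviour).
    return min(1.0, step / float(warmup_steps))

# e.g. total_loss = reconstruction_loss + kl_weight(step) * kl_cost + bow_loss
```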
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-11
Distance Functions used for Evaluation
Smoothed Sentence-level BLEU (1/2/3/4): lexical similarity. Cosine distance of bag-of-word embeddings: distributed semantic similarity (pre-trained GloVe embeddings on Twitter): a. average of embeddings (A-bow); b. extrema of embeddings (E-bow). Dialog Act Match: illocutionary-force-level similarity (tagged with a pre-trained dialog act tagger).
Smoothed Sentence-level BLEU (1/2/3/4): lexical similarity. Cosine distance of bag-of-word embeddings: distributed semantic similarity (pre-trained GloVe embeddings on Twitter): a. average of embeddings (A-bow); b. extrema of embeddings (E-bow). Dialog Act Match: illocutionary-force-level similarity (tagged with a pre-trained dialog act tagger).
[]
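As a concrete illustration of the response-level precision/recall and the three distance functions listed in this slide, here is a minimal Python sketch. The helper names and the rescaling of cosine similarity into [0, 1] are assumptions (the paper only states that d(r, h) lies between 0 and 1), so this is not the authors' evaluation code:

```python
import numpy as np

def precision_recall(refs, hyps, d):
    """Response-level precision/recall for one dialog context.
    refs: reference responses, hyps: generated hypotheses,
    d(r, h): similarity in [0, 1] (smoothed BLEU, A-bow/E-bow cosine,
    or 0/1 dialog-act match)."""
    precision = np.mean([max(d(r, h) for r in refs) for h in hyps])
    recall = np.mean([max(d(r, h) for h in hyps) for r in refs])
    return precision, recall

def a_bow(ref_vecs, hyp_vecs):
    """A-bow: cosine similarity of averaged word embeddings,
    mapped from [-1, 1] to [0, 1] (the mapping is an assumption)."""
    a, b = np.mean(ref_vecs, axis=0), np.mean(hyp_vecs, axis=0)
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    return 0.5 * (cos + 1.0)

def dialog_act_match(ref_act, hyp_act):
    """Discourse-level distance: 1 if the tagged dialog acts agree, else 0."""
    return 1.0 if ref_act == hyp_act else 0.0
```

The reported score would then be the average of these per-context values over the whole test set, as described in the quantitative-analysis section above.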
GEM-SciDuet-train-100#paper-1263#slide-12
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN u L(θ, φ; x, c) = −KL(q φ (z|x, c) p θ (z|c)) + E q φ (z|c,x) [log p θ (x|z, c)] (1) ≤ log p(x|c) i = [ h i , h i ].", "x is simply u k .", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u 1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h c of the context encoder is concatenated with meta features and c = [h c , m].", "Since we assume z follows isotropic Gaussian distribution, the recognition network q φ (z|x, c) ∼ N (µ, σ 2 I) and the prior network p θ (z|c) ∼ N (µ , σ 2 I), and then we have: µ log(σ 2 ) = W r x c + b r (2) µ log(σ 2 ) = MLP p (c) (3) We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N (z; µ, σ 2 I) predicted by the recognition network (training) or N (z; µ , σ 2 I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s 0 = W i [z, c]+b i .", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y. y relies on z and c as shown in Figure 2 .", "Specifically, during training the initial state of the response decoder is s 0 = W i [z, c, y] + b i and the input at every step is [e t , y] where e t is the word embedding of t th word in x.", "In addition, there is an MLP to predict y = MLP y (z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle decoders.", "We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture.", "KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q φ (z|x, c, y) P θ (z|c)) + E q φ (z|c,x,y) [log p(x|z, c, y)] + E q φ (z|c,x,y) [log p(y|z, c)] (4) Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge .", "Following our one-tomany hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M c reference responses r j , j ∈ [1, M c ].", "Meanwhile a model can generate N hypothesis re- sponses h i , i ∈ [1, N ].", "The generalized responselevel precision/recall for a given dialog context is: precision(c) = N i=1 max j∈[1,Mc] d(r j , h i ) N recall(c) = Mc j=1 max i∈[1,N ] d(r j , h i )) M c where d(r j , h i ) is a distance function which lies between 0 to 1 and measures the similarities between r j and h i .", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015) .", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.", "Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016) .", "The d(r j , h i ) is the cosine distance of the two embedding vectors.", "We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow.", "3.", "Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r j , h i ) = 1 if r j and h i have the same dialog acts, otherwise d(r j , h i ) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts.", "This impacts reliability of our measures.", "Inspired by (Sordoni et al., 2015) , we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references in average per context.", "The average number of distinct reference dialog acts is 4.2.", "Table 1 The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-12
Models trained with BOW loss
Encoder Sampling Decoder Baseline Encoder z Greedy Decoder kgCVAE
Encoder Sampling Decoder Baseline Encoder z Greedy Decoder kgCVAE
[]
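Since this slide concerns the models trained with the bag-of-word (BOW) loss, a minimal PyTorch sketch of that auxiliary term follows: the latent variable and context must predict every word of the response without regard to order. This is a simplified illustration, not the authors' implementation; `bow_logits` is assumed to be the output of an MLP over the latent variable and context, and the padding convention is an assumption:

```python
import torch
import torch.nn.functional as F

def bow_loss(bow_logits, response_ids, pad_id=0):
    """Negative log-likelihood of the response's bag of words.
    bow_logits:   (batch, vocab_size) scores from MLP_b over [z, c]
    response_ids: (batch, max_len) word indices of the target response x"""
    log_probs = F.log_softmax(bow_logits, dim=-1)    # (batch, vocab_size)
    token_ll = log_probs.gather(1, response_ids)     # (batch, max_len)
    mask = (response_ids != pad_id).float()          # ignore padding tokens
    return -(token_ll * mask).sum(dim=1).mean()
```

As described in the paper text above, this term is added to the variational lower bound (together with KL annealing in the best-performing setup) so that the RNN decoder cannot simply ignore the latent variable z.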
GEM-SciDuet-train-100#paper-1263#slide-13
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN u L(θ, φ; x, c) = −KL(q φ (z|x, c) p θ (z|c)) + E q φ (z|c,x) [log p θ (x|z, c)] (1) ≤ log p(x|c) i = [ h i , h i ].", "x is simply u k .", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u 1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h c of the context encoder is concatenated with meta features and c = [h c , m].", "Since we assume z follows isotropic Gaussian distribution, the recognition network q φ (z|x, c) ∼ N (µ, σ 2 I) and the prior network p θ (z|c) ∼ N (µ , σ 2 I), and then we have: µ log(σ 2 ) = W r x c + b r (2) µ log(σ 2 ) = MLP p (c) (3) We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N (z; µ, σ 2 I) predicted by the recognition network (training) or N (z; µ , σ 2 I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s 0 = W i [z, c]+b i .", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y. y relies on z and c as shown in Figure 2 .", "Specifically, during training the initial state of the response decoder is s 0 = W i [z, c, y] + b i and the input at every step is [e t , y] where e t is the word embedding of t th word in x.", "In addition, there is an MLP to predict y = MLP y (z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle decoders.", "We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture.", "KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q φ (z|x, c, y) P θ (z|c)) + E q φ (z|c,x,y) [log p(x|z, c, y)] + E q φ (z|c,x,y) [log p(y|z, c)] (4) Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
"Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge.", "Following our one-to-many hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M_c reference responses r_j, j ∈ [1, M_c].", "Meanwhile, a model can generate N hypothesis responses h_i, i ∈ [1, N].", "The generalized response-level precision/recall for a given dialog context is: precision(c) = (1/N) ∑_{i=1}^{N} max_{j ∈ [1, M_c]} d(r_j, h_i) and recall(c) = (1/M_c) ∑_{j=1}^{M_c} max_{i ∈ [1, N]} d(r_j, h_i), where d(r_j, h_i) is a distance function that lies between 0 and 1 and measures the similarity between r_j and h_i.", "The final score is averaged over the entire test dataset, and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: 1. Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified n-gram precision with a length penalty (Papineni et al., 2002; Li et al., 2015).", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to a 0-to-1 scale.", "2. Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016).", "Here d(r_j, h_i) is the cosine distance of the two embedding vectors.", "We used the GloVe embeddings described in Section 4 and denote the average method as A-bow and the extrema method as E-bow.", "3. Dialog Act Match: to measure the similarity at the discourse level, the same dialog-act tagger from Section 4.1 is applied to label all the generated responses of each model.", "We set d(r_j, h_i) = 1 if r_j and h_i have the same dialog acts, otherwise d(r_j, h_i) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple, reference response/context.", "This impacts the reliability of our measures.", "Inspired by (Sordoni et al., 2015), we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/contexts from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references on average per context.", "The average number of distinct reference dialog acts is 4.2.", "As shown in Table 1, the proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores compared to CVAE in all metrics, which is expected since the baseline tends to generate the most likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-bow).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.",
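Before turning to the entropy analysis below, note that the generalized response-level precision and recall defined above reduce to a small amount of code once a pairwise distance d(r, h) in [0, 1] is fixed (smoothed BLEU, A-bow/E-bow cosine, or dialog-act match). A minimal sketch, with dialog_act_of standing in as a hypothetical wrapper around the SVM tagger of Section 4.1:

```python
def response_precision_recall(references, hypotheses, d):
    """references: M_c reference responses; hypotheses: N generated responses;
    d(r, h) in [0, 1] is any of the three similarity functions above."""
    precision = sum(max(d(r, h) for r in references) for h in hypotheses) / len(hypotheses)
    recall = sum(max(d(r, h) for h in hypotheses) for r in references) / len(references)
    return precision, recall

# Example distance: dialog-act match, d = 1 iff the tagger assigns the same act
def dialog_act_match(r, h):
    return 1.0 if dialog_act_of(r) == dialog_act_of(h) else 0.0
```

The final reported numbers are the averages of these per-context scores over the test set.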
"A low number of distinct dialog acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (high entropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher-entropy contexts.", "It also shows that CVAE suffers from lower precision, especially in low-entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE over the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e. \"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel, and the range of valid answers is more constrained than in example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e. modeling lexical diversity in low-entropy dialog contexts while modeling discourse-level diversity in high-entropy ones.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimensional data, so we conjecture that the posterior z output by the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008).", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of the bag-of-word (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015), we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is the same except that we use a GRU instead of an LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well-trained model should lead to a low reconstruction loss and a small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by having a KL cost close to 0 and a reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014).", "(Table 2 caption: Generated responses from the baselines and kgCVAE in two examples; kgCVAE also provides the predicted dialog act for each response; the context only shows the last utterance due to the space limit, the actual context window size being 10.)",
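The KL annealing schedule used in setups (2) and (4) above is a simple linear ramp of the KL weight, combined additively with the reconstruction and bag-of-word terms. A sketch of how such a schedule is typically wired into the training loss (the function name and default are illustrative; the paper reports 5,000 batches for the PTB comparison and 10,000 batches for the dialog models):

```python
def kl_anneal_weight(step, anneal_steps=5000):
    """Linear KL annealing: the weight grows from 0 to 1 over the first
    anneal_steps mini-batches, then stays at 1."""
    return min(1.0, step / anneal_steps)

# Inside the training loop (illustrative):
# loss = rc_loss + kl_anneal_weight(global_step) * kl_loss + bow_loss
```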
"KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "Lastly, the models with BOW loss achieved significantly lower perplexity and a larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in the latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5,000 batches), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of the BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used the BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses with respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representations of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier for humans to explain and control.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other linguistic phenomena, including sentiment, named entities, etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a data-driven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-13
Quantitative Analysis Results
Note: BLEU scores are normalized into [0, 1] so that they form a valid precision and recall distance function
Note: BLEU scores are normalized into [0, 1] so that they form a valid precision and recall distance function
[]
GEM-SciDuet-train-100#paper-1263#slide-14
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g. N(0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g. generating different human faces given skin color.", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al. (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences even with a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al. (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k-th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k−1 utterances; the conversational floor (1 if the utterance is from the same speaker as x, otherwise 0); and meta features m (e.g. the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c) p(z|c), and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p_θ(z|c) as the prior network and p_θ(x|z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1. sample a latent variable z from the prior network p_θ(z|c); 2. generate x through the response decoder p_θ(x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in prior work, CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume that z follows a multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q_φ(z|x, c) to approximate the true posterior distribution p(z|x, c).", "Prior work has shown that the variational lower bound can be written as: L(θ, φ; x, c) = −KL(q_φ(z|x, c) || p_θ(z|c)) + E_{q_φ(z|x, c)}[log p_θ(x|z, c)] ≤ log p(x|c) (1).", "Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) that encodes each utterance into a fixed-size vector u_i by concatenating the last hidden states of the forward and backward RNN.", "The representation of x is simply u_k.", "The context encoder is a 1-layer GRU network that encodes the preceding k−1 utterances by taking u_{1:k−1} and the corresponding conversation floor as inputs.", "The last hidden state h_c of the context encoder is concatenated with the meta features, giving c = [h_c, m].", "Since we assume that z follows an isotropic Gaussian distribution, the recognition network is q_φ(z|x, c) ∼ N(µ, σ^2 I) and the prior network is p_θ(z|c) ∼ N(µ', σ'^2 I), where [µ, log(σ^2)] = W_r [x, c] + b_r (2) and [µ', log(σ'^2)] = MLP_p(c) (3).", "We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N(z; µ, σ^2 I) predicted by the recognition network (training) or from N(z; µ', σ'^2 I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s_0 = W_i [z, c] + b_i.", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires a large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn a meaningful latent z if it is provided with explicitly extracted discourse features during training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y, where y relies on z and c, as shown in Figure 2.", "Specifically, during training the initial state of the response decoder is s_0 = W_i [z, c, y] + b_i and the input at every step is [e_t, y], where e_t is the word embedding of the t-th word in x.", "In addition, there is an MLP to predict y = MLP_y(z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle y.", "We denote the modified model as knowledge-guided CVAE (kgCVAE), and developers can add desired discourse features that they wish the latent variable z to capture.", "The kgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q_φ(z|x, c, y) || p_θ(z|c)) + E_{q_φ(z|c,x,y)}[log p(x|z, c, y)] + E_{q_φ(z|c,x,y)}[log p(y|z, c)] (4).", "Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c.
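Equations (1)-(3) above boil down to two small networks that output Gaussian parameters, a reparametrized sample of z, and an analytic KL term between the two diagonal Gaussians. The sketch below is illustrative only, assuming PyTorch; note that the paper uses a single linear layer for the recognition network (Equation 2) and a one-hidden-layer tanh MLP for the prior network (Equation 3), while the sketch uses one generic module for brevity.

```python
import torch
import torch.nn as nn

class LatentGaussian(nn.Module):
    """Outputs (mu, log sigma^2) of a diagonal Gaussian over z, usable for
    either the recognition network q(z|x,c) or the prior network p(z|c)."""

    def __init__(self, input_size, latent_size, hidden_size=400):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, 2 * latent_size),
        )

    def forward(self, inputs):
        mu, logvar = self.net(inputs).chunk(2, dim=-1)
        return mu, logvar

def reparametrize(mu, logvar):
    """z = mu + sigma * eps with eps ~ N(0, I) (Kingma and Welling, 2013)."""
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, sigma_q^2 I) || N(mu_p, sigma_p^2 I) ) for diagonal Gaussians,
    i.e. the first term of the lower bound in Equation (1)."""
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0,
        dim=-1,
    )
```

During training the recognition network provides (mu, logvar); at test time the prior network does, so the decoder never sees the gold response.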
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge .", "Following our one-tomany hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M c reference responses r j , j ∈ [1, M c ].", "Meanwhile a model can generate N hypothesis re- sponses h i , i ∈ [1, N ].", "The generalized responselevel precision/recall for a given dialog context is: precision(c) = N i=1 max j∈[1,Mc] d(r j , h i ) N recall(c) = Mc j=1 max i∈[1,N ] d(r j , h i )) M c where d(r j , h i ) is a distance function which lies between 0 to 1 and measures the similarities between r j and h i .", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015) .", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.", "Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016) .", "The d(r j , h i ) is the cosine distance of the two embedding vectors.", "We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow.", "3.", "Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r j , h i ) = 1 if r j and h i have the same dialog acts, otherwise d(r j , h i ) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts.", "This impacts reliability of our measures.", "Inspired by (Sordoni et al., 2015) , we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references in average per context.", "The average number of distinct reference dialog acts is 4.2.", "Table 1 The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-14
Qualitative Analysis
Topic: Recycling Context: A: are they doing a lot of recycling out in Georgia? Target (statement): well at my workplace we have places for aluminium cans Baseline + Sampling kgCVAE + Greedy 1. well Im a graduate student and have two kids. 2. well I was in last year and so weve had lots of recycling. 2. (statement) oh youre not going to have a curbside pick up here. 3. Im not sure. 3. (statement) okay I am sure about a recycling center. 4. well I dont know I just moved here in new york. 4. (yes-answer) yeah so.
Topic: Recycling Context: A: are they doing a lot of recycling out in Georgia? Target (statement): well at my workplace we have places for aluminium cans Baseline + Sampling kgCVAE + Greedy 1. well Im a graduate student and have two kids. 2. well I was in last year and so weve had lots of recycling. 2. (statement) oh youre not going to have a curbside pick up here. 3. Im not sure. 3. (statement) okay I am sure about a recycling center. 4. well I dont know I just moved here in new york. 4. (yes-answer) yeah so.
[]
GEM-SciDuet-train-100#paper-1263#slide-15
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN u L(θ, φ; x, c) = −KL(q φ (z|x, c) p θ (z|c)) + E q φ (z|c,x) [log p θ (x|z, c)] (1) ≤ log p(x|c) i = [ h i , h i ].", "x is simply u k .", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u 1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h c of the context encoder is concatenated with meta features and c = [h c , m].", "Since we assume z follows isotropic Gaussian distribution, the recognition network q φ (z|x, c) ∼ N (µ, σ 2 I) and the prior network p θ (z|c) ∼ N (µ , σ 2 I), and then we have: µ log(σ 2 ) = W r x c + b r (2) µ log(σ 2 ) = MLP p (c) (3) We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N (z; µ, σ 2 I) predicted by the recognition network (training) or N (z; µ , σ 2 I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s 0 = W i [z, c]+b i .", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y. y relies on z and c as shown in Figure 2 .", "Specifically, during training the initial state of the response decoder is s 0 = W i [z, c, y] + b i and the input at every step is [e t , y] where e t is the word embedding of t th word in x.", "In addition, there is an MLP to predict y = MLP y (z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle decoders.", "We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture.", "KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q φ (z|x, c, y) P θ (z|c)) + E q φ (z|c,x,y) [log p(x|z, c, y)] + E q φ (z|c,x,y) [log p(y|z, c)] (4) Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge.", "Following our one-to-many hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M_c reference responses r_j, j ∈ [1, M_c].", "Meanwhile, a model can generate N hypothesis responses h_i, i ∈ [1, N].", "The generalized response-level precision/recall for a given dialog context is precision(c) = (1/N) Σ_{i=1}^{N} max_{j∈[1,M_c]} d(r_j, h_i) and recall(c) = (1/M_c) Σ_{j=1}^{M_c} max_{i∈[1,N]} d(r_j, h_i), where d(r_j, h_i) is a distance function that lies between 0 and 1 and measures the similarity between r_j and h_i.", "The final score is averaged over the entire test dataset, and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: (1) Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified n-gram precision with a length penalty (Papineni et al., 2002; Li et al., 2015).", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to a 0-to-1 scale.", "(2) Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016).", "Here d(r_j, h_i) is the cosine distance of the two embedding vectors.", "We used the GloVe embedding described in Section 4 and denote the average method as A-bow and the extrema method as E-bow.", "(3) Dialog Act Match: to measure the similarity at the discourse level, the same dialog-act tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r_j, h_i) = 1 if r_j and h_i have the same dialog act, otherwise d(r_j, h_i) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple, reference response/context.", "This impacts the reliability of our measures.", "Inspired by (Sordoni et al., 2015), we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/contexts from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references on average per context.", "The average number of distinct reference dialog acts is 4.2.", "As shown in Table 1, the proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores to CVAE in all metrics, which is expected since the baseline tends to generate the most likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU-1 to 4, E-bow).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
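Editor's illustration: the paper content above specifies a recognition network q_φ(z|x, c), a prior network p_θ(z|c) (Equations 2-3), and the reparametrization trick for sampling z. The following is a minimal PyTorch-style sketch of that sampling step; it is not the authors' released code, and the layer sizes (taken from the training section where possible) and all names are assumptions of this sketch.

import torch
import torch.nn as nn

LATENT_DIM = 200        # size of z reported in the training section
UTT_DIM = 600           # utterance vector u_k = [h_fwd, h_bwd], 2 x 300
COND_DIM = 600 + 2      # context encoder hidden (600) + assumed meta-feature size

class CVAESampler(nn.Module):
    def __init__(self):
        super().__init__()
        # Recognition network: a single linear layer over [x, c] (Eq. 2)
        self.recog = nn.Linear(UTT_DIM + COND_DIM, 2 * LATENT_DIM)
        # Prior network: an MLP over c alone with a tanh hidden layer (Eq. 3)
        self.prior = nn.Sequential(
            nn.Linear(COND_DIM, 400), nn.Tanh(),
            nn.Linear(400, 2 * LATENT_DIM),
        )

    @staticmethod
    def reparametrize(mu, logvar):
        # z = mu + sigma * eps, eps ~ N(0, I)  (Kingma & Welling, 2013)
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x, c, training=True):
        prior_mu, prior_logvar = self.prior(c).chunk(2, dim=-1)
        if training:
            # training: sample from the recognition network q_phi(z|x, c)
            recog_mu, recog_logvar = self.recog(torch.cat([x, c], dim=-1)).chunk(2, dim=-1)
            z = self.reparametrize(recog_mu, recog_logvar)
        else:
            # testing: sample from the prior network p_theta(z|c)
            z = self.reparametrize(prior_mu, prior_logvar)
            recog_mu = recog_logvar = None
        return z, (recog_mu, recog_logvar), (prior_mu, prior_logvar)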
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-15
Latent Space Visualization
Visualization of the posterior Z on the test dataset in 2D space using t-SNE. Assign different colors to the top 8 frequent dialog acts. The size of circle represents the response length. Exhibit clear clusterings of responses w.r.t. the dialog act.
Visualization of the posterior Z on the test dataset in 2D space using t-SNE. Assign different colors to the top 8 frequent dialog acts. The size of circle represents the response length. Exhibit clear clusterings of responses w.r.t. the dialog act.
[]
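Editor's illustration: slide 15 above summarizes Figure 5 of the paper, in which posterior z vectors of test responses, projected to 2D with t-SNE, cluster by dialog act and response length. One way such a plot could be produced from collected posterior means is sketched below, assuming scikit-learn and matplotlib; the function and variable names are illustrative only.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_latent_space(z_post, dialog_acts, top_k=8):
    """z_post: (n, latent_dim) posterior means; dialog_acts: list of n string labels."""
    z_2d = TSNE(n_components=2, random_state=0).fit_transform(np.asarray(z_post))
    acts = np.asarray(dialog_acts)
    # colour only the top-k most frequent dialog acts, grey out the rest
    labels, counts = np.unique(acts, return_counts=True)
    top = set(labels[np.argsort(-counts)][:top_k])
    for act in top:
        idx = acts == act
        plt.scatter(z_2d[idx, 0], z_2d[idx, 1], s=10, label=act)
    rest = ~np.isin(acts, list(top))
    plt.scatter(z_2d[rest, 0], z_2d[rest, 1], s=5, c="lightgrey")
    plt.legend(markerscale=2, fontsize=8)
    plt.title("t-SNE of posterior z, coloured by dialog act")
    plt.show()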
GEM-SciDuet-train-100#paper-1263#slide-16
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
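Editor's illustration: the abstract above emphasizes that diversity comes from the latent variable rather than the decoder, i.e. N samples of z are drawn from the prior and each one is decoded greedily. A schematic of that generation loop is sketched below; prior_net, decoder, and vocab are placeholder components assumed for this illustration, not interfaces from the paper's code.

import torch

def generate_diverse_responses(context_vec, prior_net, decoder, vocab, n_samples=5, max_len=40):
    """Draw N latent samples from the prior p_theta(z|c) and decode each one greedily."""
    responses = []
    mu, logvar = prior_net(context_vec)              # assumed to return the prior mean / log-variance
    for _ in range(n_samples):
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        state = decoder.init_state(z, context_vec)   # s_0 = W_i [z, c] + b_i
        tokens = []
        tok = vocab.bos_id
        for _ in range(max_len):
            logits, state = decoder.step(tok, state)
            tok = int(logits.argmax(dim=-1))         # greedy: argmax at every step, no sampling
            if tok == vocab.eos_id:
                break
            tokens.append(tok)
        responses.append([vocab.id_to_word[t] for t in tokens])
    return responses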
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN u L(θ, φ; x, c) = −KL(q φ (z|x, c) p θ (z|c)) + E q φ (z|c,x) [log p θ (x|z, c)] (1) ≤ log p(x|c) i = [ h i , h i ].", "x is simply u k .", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u 1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h c of the context encoder is concatenated with meta features and c = [h c , m].", "Since we assume z follows isotropic Gaussian distribution, the recognition network q φ (z|x, c) ∼ N (µ, σ 2 I) and the prior network p θ (z|c) ∼ N (µ , σ 2 I), and then we have: µ log(σ 2 ) = W r x c + b r (2) µ log(σ 2 ) = MLP p (c) (3) We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N (z; µ, σ 2 I) predicted by the recognition network (training) or N (z; µ , σ 2 I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s 0 = W i [z, c]+b i .", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y. y relies on z and c as shown in Figure 2 .", "Specifically, during training the initial state of the response decoder is s 0 = W i [z, c, y] + b i and the input at every step is [e t , y] where e t is the word embedding of t th word in x.", "In addition, there is an MLP to predict y = MLP y (z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle decoders.", "We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture.", "KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q φ (z|x, c, y) P θ (z|c)) + E q φ (z|c,x,y) [log p(x|z, c, y)] + E q φ (z|c,x,y) [log p(y|z, c)] (4) Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge .", "Following our one-tomany hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M c reference responses r j , j ∈ [1, M c ].", "Meanwhile a model can generate N hypothesis re- sponses h i , i ∈ [1, N ].", "The generalized responselevel precision/recall for a given dialog context is: precision(c) = N i=1 max j∈[1,Mc] d(r j , h i ) N recall(c) = Mc j=1 max i∈[1,N ] d(r j , h i )) M c where d(r j , h i ) is a distance function which lies between 0 to 1 and measures the similarities between r j and h i .", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015) .", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.", "Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016) .", "The d(r j , h i ) is the cosine distance of the two embedding vectors.", "We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow.", "3.", "Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r j , h i ) = 1 if r j and h i have the same dialog acts, otherwise d(r j , h i ) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts.", "This impacts reliability of our measures.", "Inspired by (Sordoni et al., 2015) , we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references in average per context.", "The average number of distinct reference dialog acts is 4.2.", "Table 1 The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
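Editor's illustration: the quantitative analysis above defines a generalized response-level precision and recall under a distance function d in [0, 1], matching each hypothesis to its closest reference for precision and each reference to its closest hypothesis for recall. A direct translation of those two formulas is sketched below, using an averaged-word-embedding cosine (the A-bow variant) as an example d; the embedding lookup is an assumed dict-like helper.

import numpy as np

def bow_cosine(ref_tokens, hyp_tokens, emb):
    """d(r, h): cosine similarity of averaged word embeddings (the A-bow variant)."""
    r_vecs = [emb[w] for w in ref_tokens if w in emb]
    h_vecs = [emb[w] for w in hyp_tokens if w in emb]
    if not r_vecs or not h_vecs:
        return 0.0
    r, h = np.mean(r_vecs, axis=0), np.mean(h_vecs, axis=0)
    denom = np.linalg.norm(r) * np.linalg.norm(h)
    return float(np.dot(r, h) / denom) if denom > 0 else 0.0

def precision_recall(references, hypotheses, d):
    """references: M_c token lists; hypotheses: N token lists; d: similarity score in [0, 1]."""
    scores = [[d(r, h) for r in references] for h in hypotheses]      # N x M_c matrix
    precision = sum(max(row) for row in scores) / len(hypotheses)     # average over hypotheses
    recall = sum(max(scores[i][j] for i in range(len(hypotheses)))
                 for j in range(len(references))) / len(references)   # average over references
    return precision, recall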
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-16
The Effect of BOW Loss
Same setup on Penn Treebank for LM. Table columns: Model, Perplexity, KL Cost. Goal: low reconstruction loss + small but non-trivial KL cost; the BOW+KLA setting meets this goal.
Same setup on Penn Treebank for LM. Table columns: Model, Perplexity, KL Cost. Goal: low reconstruction loss + small but non-trivial KL cost; the BOW+KLA setting meets this goal.
[]
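Editor's illustration: the "KL Cost" tracked in the slide above is the KL(q_φ(z|x,c) || p_θ(z|c)) term of the lower bound, which has a closed form for the diagonal Gaussians assumed in the paper. A small sketch of that term follows (an illustration, not the authors' code).

import torch

def gaussian_kl(recog_mu, recog_logvar, prior_mu, prior_logvar):
    """KL( N(recog_mu, recog_var) || N(prior_mu, prior_var) ) for diagonal Gaussians."""
    kl = 0.5 * (
        prior_logvar - recog_logvar
        + (torch.exp(recog_logvar) + (recog_mu - prior_mu) ** 2) / torch.exp(prior_logvar)
        - 1.0
    )
    return kl.sum(dim=-1).mean()   # sum over latent dimensions, average over the batch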
GEM-SciDuet-train-100#paper-1263#slide-17
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
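Editor's illustration: the dataset section later in this record describes tagging the unlabeled part of Switchboard with a linear-kernel SVM over uni-/bi-gram and context features, reaching 77.3% held-out accuracy over 42 dialog acts. A compact scikit-learn approximation is sketched below; it drops the contextual features of the last 3 utterances, so it is a simplified illustration rather than a reproduction of that tagger.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def train_dialog_act_tagger(utterances, act_labels):
    """utterances: list of strings; act_labels: list of dialog-act tags (42 classes in SW)."""
    tagger = make_pipeline(
        CountVectorizer(ngram_range=(1, 2)),   # uni-gram and bi-gram features of the utterance
        LinearSVC(),                           # linear-kernel SVM
    )
    tagger.fit(utterances, act_labels)
    return tagger

# Usage sketch: tag the rest of the corpus once trained
# tagger = train_dialog_act_tagger(labeled_utts, labeled_acts)
# predicted_acts = tagger.predict(unlabeled_utts)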
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN u L(θ, φ; x, c) = −KL(q φ (z|x, c) p θ (z|c)) + E q φ (z|c,x) [log p θ (x|z, c)] (1) ≤ log p(x|c) i = [ h i , h i ].", "x is simply u k .", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u 1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h c of the context encoder is concatenated with meta features and c = [h c , m].", "Since we assume z follows isotropic Gaussian distribution, the recognition network q φ (z|x, c) ∼ N (µ, σ 2 I) and the prior network p θ (z|c) ∼ N (µ , σ 2 I), and then we have: µ log(σ 2 ) = W r x c + b r (2) µ log(σ 2 ) = MLP p (c) (3) We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N (z; µ, σ 2 I) predicted by the recognition network (training) or N (z; µ , σ 2 I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s 0 = W i [z, c]+b i .", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y. y relies on z and c as shown in Figure 2 .", "Specifically, during training the initial state of the response decoder is s 0 = W i [z, c, y] + b i and the input at every step is [e t , y] where e t is the word embedding of t th word in x.", "In addition, there is an MLP to predict y = MLP y (z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle decoders.", "We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture.", "KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q φ (z|x, c, y) P θ (z|c)) + E q φ (z|c,x,y) [log p(x|z, c, y)] + E q φ (z|c,x,y) [log p(y|z, c)] (4) Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge .", "Following our one-tomany hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M c reference responses r j , j ∈ [1, M c ].", "Meanwhile a model can generate N hypothesis re- sponses h i , i ∈ [1, N ].", "The generalized responselevel precision/recall for a given dialog context is: precision(c) = N i=1 max j∈[1,Mc] d(r j , h i ) N recall(c) = Mc j=1 max i∈[1,N ] d(r j , h i )) M c where d(r j , h i ) is a distance function which lies between 0 to 1 and measures the similarities between r j and h i .", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015) .", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.", "Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016) .", "The d(r j , h i ) is the cosine distance of the two embedding vectors.", "We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow.", "3.", "Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r j , h i ) = 1 if r j and h i have the same dialog acts, otherwise d(r j , h i ) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts.", "This impacts reliability of our measures.", "Inspired by (Sordoni et al., 2015) , we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references in average per context.", "The average number of distinct reference dialog acts is 4.2.", "Table 1 The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
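The paper content above specifies the CVAE dialog model mathematically: a prior network p_theta(z|c) and a recognition network q_phi(z|x, c), both diagonal Gaussians, trained by maximizing the variational lower bound of Equation 1 with the reparametrization trick. As a reading aid, here is a minimal PyTorch sketch of that core. It is not the authors' released implementation; the class and variable names are invented, the layer sizes follow Section 4.2 where stated (latent size 200, one 400-unit tanh hidden layer in the prior network), and meta features are omitted.

```python
import torch
import torch.nn as nn

class CVAECore(nn.Module):
    """Minimal CVAE core for dialog: prior p(z|c) and recognition q(z|x,c) as diagonal Gaussians."""
    def __init__(self, ctx_dim=600, resp_dim=600, latent_dim=200, hidden=400):
        super().__init__()
        # Prior network: one tanh hidden layer (size 400), as described in Section 4.2.
        self.prior_net = nn.Sequential(
            nn.Linear(ctx_dim, hidden), nn.Tanh(), nn.Linear(hidden, 2 * latent_dim))
        # Recognition network: a single linear map over [x; c] (Eq. 2).
        self.recog_net = nn.Linear(resp_dim + ctx_dim, 2 * latent_dim)

    @staticmethod
    def reparameterize(mu, logvar):
        # z = mu + sigma * eps with eps ~ N(0, I) (Kingma & Welling, 2013).
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    @staticmethod
    def kl_divergence(mu_q, logvar_q, mu_p, logvar_p):
        # KL(q || p) between two diagonal Gaussians, summed over latent dimensions.
        return 0.5 * torch.sum(
            logvar_p - logvar_q
            + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp() - 1.0, dim=-1)

    def forward(self, x_enc, c_enc):
        mu_q, logvar_q = self.recog_net(torch.cat([x_enc, c_enc], dim=-1)).chunk(2, dim=-1)
        mu_p, logvar_p = self.prior_net(c_enc).chunk(2, dim=-1)
        z = self.reparameterize(mu_q, logvar_q)   # training draws z from the recognition network
        kl = self.kl_divergence(mu_q, logvar_q, mu_p, logvar_p)
        return z, kl                              # the decoder term E_q[log p(x|z,c)] completes Eq. 1
```

At test time z would instead be drawn from the prior network's Gaussian, and [z, c] would initialize the GRU response decoder through a linear layer, matching the generative process described above.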
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
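The bag-of-word loss of Equations 5 and 6, the auxiliary term that forces z to carry global information about the response, reduces to a log-softmax over the vocabulary gathered at the target word indices. The sketch below is only an illustration: the function name, the padding convention, and the exact architecture of MLP_b are assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def bow_loss(z, c, target_ids, mlp_b, pad_id=0):
    """Auxiliary bag-of-word loss, -log p(x_bow | z, c) as in Eq. 5.

    mlp_b maps [z; c] to a vector f of vocabulary-size logits; the loss sums the
    log-softmax of f at the indices of the words in the target response.
    """
    f = mlp_b(torch.cat([z, c], dim=-1))            # (batch, vocab_size)
    log_probs = F.log_softmax(f, dim=-1)            # log (e^{f_v} / sum_j e^{f_j})
    token_ll = log_probs.gather(1, target_ids)      # (batch, max_len)
    mask = (target_ids != pad_id).float()           # skip padding positions
    return -(token_ll * mask).sum(dim=1).mean()

# Illustrative wiring; vocabulary and latent sizes mirror Section 4, but the shape of
# MLP_b itself is an assumption.
vocab_size, latent_dim, ctx_dim = 10000, 200, 600
mlp_b = nn.Sequential(nn.Linear(latent_dim + ctx_dim, 400), nn.Tanh(),
                      nn.Linear(400, vocab_size))
```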
GEM-SciDuet-train-100#paper-1263#slide-17
KL Cost during Training
Standard model suffers from vanishing KL. KLA requires early stopping. BOW leads to stable convergence. The same trend is observed on CVAE.
Standard model suffers from vanishing KL. KLA requires early stopping. BOW leads to stable convergence. The same trend is observed on CVAE.
[]
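The quantitative analysis above defines a generalized response-level precision and recall: each of the N hypotheses is scored by its best match among the M_c references (precision), and each reference by its best match among the hypotheses (recall), under a similarity d(r, h) in [0, 1]. A small reference implementation of that matching, with one of the three similarity choices (cosine of averaged bag-of-word embeddings, A-bow), could look like the sketch below; the helper names and the toy embedding values are purely illustrative.

```python
from typing import Callable, List, Sequence
import numpy as np

def generalized_pr(refs: List[Sequence[str]],
                   hyps: List[Sequence[str]],
                   d: Callable[[Sequence[str], Sequence[str]], float]):
    """Response-level precision/recall for a single dialog context.

    precision: each hypothesis is scored by its best match among the references;
    recall:    each reference is scored by its best match among the hypotheses.
    d(r, h) is any similarity in [0, 1] (BLEU, embedding cosine, dialog-act match, ...).
    """
    precision = float(np.mean([max(d(r, h) for r in refs) for h in hyps]))
    recall = float(np.mean([max(d(r, h) for h in hyps) for r in refs]))
    return precision, recall

def abow_similarity(emb: dict, r: Sequence[str], h: Sequence[str]) -> float:
    """A-bow variant: cosine of averaged word embeddings; OOV words are skipped and
    negative cosines are clipped to 0 so the score stays in [0, 1]."""
    def avg(tokens):
        vecs = [emb[t] for t in tokens if t in emb]
        return np.mean(vecs, axis=0) if vecs else np.zeros_like(next(iter(emb.values())))
    a, b = avg(r), avg(h)
    denom = float(np.linalg.norm(a) * np.linalg.norm(b))
    return max(0.0, float(a @ b) / denom) if denom > 0 else 0.0

# Toy usage (embedding values are made up purely for illustration):
emb = {"yes": np.array([1.0, 0.2]), "sure": np.array([0.9, 0.3]), "no": np.array([0.0, 1.0])}
p, r = generalized_pr(refs=[["yes"], ["no"]], hyps=[["sure"], ["yes"]],
                      d=lambda ref, hyp: abow_similarity(emb, ref, hyp))
```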
GEM-SciDuet-train-100#paper-1263#slide-18
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN u L(θ, φ; x, c) = −KL(q φ (z|x, c) p θ (z|c)) + E q φ (z|c,x) [log p θ (x|z, c)] (1) ≤ log p(x|c) i = [ h i , h i ].", "x is simply u k .", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u 1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h c of the context encoder is concatenated with meta features and c = [h c , m].", "Since we assume z follows isotropic Gaussian distribution, the recognition network q φ (z|x, c) ∼ N (µ, σ 2 I) and the prior network p θ (z|c) ∼ N (µ , σ 2 I), and then we have: µ log(σ 2 ) = W r x c + b r (2) µ log(σ 2 ) = MLP p (c) (3) We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N (z; µ, σ 2 I) predicted by the recognition network (training) or N (z; µ , σ 2 I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s 0 = W i [z, c]+b i .", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y. y relies on z and c as shown in Figure 2 .", "Specifically, during training the initial state of the response decoder is s 0 = W i [z, c, y] + b i and the input at every step is [e t , y] where e t is the word embedding of t th word in x.", "In addition, there is an MLP to predict y = MLP y (z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle decoders.", "We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture.", "KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q φ (z|x, c, y) P θ (z|c)) + E q φ (z|c,x,y) [log p(x|z, c, y)] + E q φ (z|c,x,y) [log p(y|z, c)] (4) Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge .", "Following our one-tomany hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M c reference responses r j , j ∈ [1, M c ].", "Meanwhile a model can generate N hypothesis re- sponses h i , i ∈ [1, N ].", "The generalized responselevel precision/recall for a given dialog context is: precision(c) = N i=1 max j∈[1,Mc] d(r j , h i ) N recall(c) = Mc j=1 max i∈[1,N ] d(r j , h i )) M c where d(r j , h i ) is a distance function which lies between 0 to 1 and measures the similarities between r j and h i .", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015) .", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.", "Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016) .", "The d(r j , h i ) is the cosine distance of the two embedding vectors.", "We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow.", "3.", "Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r j , h i ) = 1 if r j and h i have the same dialog acts, otherwise d(r j , h i ) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts.", "This impacts reliability of our measures.", "Inspired by (Sordoni et al., 2015) , we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references in average per context.", "The average number of distinct reference dialog acts is 4.2.", "Table 1 The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
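The kgCVAE variant in the paper content above adds an explicit linguistic feature y (a dialog act from the 42-class Switchboard inventory): y is predicted from z and c by an MLP, and the decoder's initial state becomes s0 = W_i[z, c, y] + b_i. The wiring below is a hedged sketch; representing y with a learned embedding, and all of the names, are assumptions made here, and only the dimensionalities echo the paper.

```python
import torch
import torch.nn as nn

class KgCVAEHead(nn.Module):
    """Predict the dialog act y from (z, c) and build the decoder's initial state s0."""
    def __init__(self, latent_dim=200, ctx_dim=600, num_acts=42,
                 act_emb_dim=30, dec_hidden=400):
        super().__init__()
        # y = MLP_y(z, c): one tanh hidden layer, 42 dialog-act classes as in Section 4.1.
        self.act_mlp = nn.Sequential(nn.Linear(latent_dim + ctx_dim, 400), nn.Tanh(),
                                     nn.Linear(400, num_acts))
        self.act_emb = nn.Embedding(num_acts, act_emb_dim)   # dense representation of y
        # s0 = W_i [z, c, y] + b_i feeds the GRU response decoder.
        self.init_state = nn.Linear(latent_dim + ctx_dim + act_emb_dim, dec_hidden)

    def forward(self, z, c, gold_act=None):
        logits = self.act_mlp(torch.cat([z, c], dim=-1))
        # Training uses the oracle dialog act; at test time the model's own prediction is used.
        act = gold_act if gold_act is not None else logits.argmax(dim=-1)
        y = self.act_emb(act)
        s0 = self.init_state(torch.cat([z, c, y], dim=-1))
        return s0, logits   # logits provide the log p(y|z, c) term of Eq. 4
```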
GEM-SciDuet-train-100#paper-1263#slide-18
Conclusion and Future Work
Identify the ONE-TO-MANY nature of open-domain dialog modeling. Propose two novel models based on latent variables for generating diverse yet appropriate responses. Explore further in the direction of leveraging both past linguistic findings and deep models for controllability and explainability. Utilize crowdsourcing to yield more robust evaluation. Code available here! https://github.com/snakeztc/NeuralDialog-CVAE
Identify the ONE-TO-MANY nature of open-domain dialog modeling. Propose two novel models based on latent variables for generating diverse yet appropriate responses. Explore further in the direction of leveraging both past linguistic findings and deep models for controllability and explainability. Utilize crowdsourcing to yield more robust evaluation. Code available here! https://github.com/snakeztc/NeuralDialog-CVAE
[]
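The training recipe reported above combines the bag-of-word loss with KL annealing, a linear ramp of the KL weight over the first 10,000 batches for the dialog models (5,000 in the Penn Treebank comparison). A sketch of that schedule and of how the loss terms are typically combined follows; the function names are illustrative, not taken from the paper.

```python
def kl_weight(step: int, full_at: int = 10000) -> float:
    """Linear KL annealing: the KL weight grows from 0 to 1 over the first `full_at` batches."""
    return min(1.0, step / float(full_at))

def training_loss(rec_nll, kl, bow_nll, step):
    """Negated lower bound with annealed KL plus the auxiliary bag-of-word loss."""
    return rec_nll + kl_weight(step) * kl + bow_nll

# e.g. kl_weight(2500) == 0.25, kl_weight(10000) == 1.0, and it stays at 1.0 afterwards
```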
GEM-SciDuet-train-100#paper-1263#slide-19
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color.", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al. (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al. (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we first introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", 
"Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k-th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; the conversational floor (1 if the utterance is from the same speaker as x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p_θ(z|c) as the prior network and p_θ(x|z, c) as the response decoder.", "Then the generative process of x is (Figure 2(a)): (1) sample a latent variable z from the prior network p_θ(z|c); (2) generate x through the response decoder p_θ(x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in prior work, CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume that z follows a multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q_φ(z|x, c) to approximate the true posterior distribution p(z|x, c).", "The variational lower bound can be written as: L(θ, φ; x, c) = −KL(q_φ(z|x, c) || p_θ(z|c)) + E_{q_φ(z|x,c)}[log p_θ(x|z, c)] ≤ log p(x|c). (1)", "Figure 3 demonstrates an overview of our model.", 
"The utterance encoder is a bidirectional recurrent neural network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) that encodes each utterance into a fixed-size vector by concatenating the last hidden states of the forward and backward RNN: u_i = [h→_i, h←_i].", "x is simply u_k.", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u_1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h_c of the context encoder is concatenated with the meta features and c = [h_c, m].", "Since we assume z follows an isotropic Gaussian distribution, the recognition network q_φ(z|x, c) ∼ N(µ, σ²I) and the prior network p_θ(z|c) ∼ N(µ′, σ′²I), and then we have: [µ, log(σ²)] = W_r [x, c] + b_r (2) and [µ′, log(σ′²)] = MLP_p(c) (3).", "We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N(z; µ, σ²I) predicted by the recognition network (training) or N(z; µ′, σ′²I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s_0 = W_i [z, c] + b_i.", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires a large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn a meaningful latent z if it is provided with explicitly extracted discourse features during training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y, and y relies on z and c as shown in Figure 2.", "Specifically, during training the initial state of the response decoder is s_0 = W_i [z, c, y] + b_i and the input at every step is [e_t, y] where e_t is the word embedding of the t-th word in x.", "In addition, there is an MLP to predict y = MLP_y(z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle y.", "We denote the modified model as knowledge-guided CVAE (kgCVAE); developers can add desired discourse features that they wish the latent variable z to capture.", "The kgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q_φ(z|x, c, y) || p_θ(z|c)) + E_{q_φ(z|x,c,y)}[log p(x|z, c, y)] + E_{q_φ(z|x,c,y)}[log p(y|z, c)]. (4)", "Since the reconstruction of y is now a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c.
Another advantage of kgCVAE is that it can output a high-level label (e.g.", "dialog act) along with the word-level responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with an RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015).", "Bowman et al. (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al. (2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3(b).", "We decompose x into two variables: x_o with word order and x_bow without order, and assume that x_o and x_bow are conditionally independent given z and c: p(x, z|c) = p(x_o|z, c)p(x_bow|z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP_b(z, x) ∈ R^V where V is the vocabulary size, and we have: log p(x_bow|z, c) = log Π_{t=1}^{|x|} ( e^{f_{x_t}} / Σ_{j=1}^{V} e^{f_j} ) (5), where |x| is the length of x and x_t is the word index of the t-th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L′(θ, φ; x, c) = L(θ, φ; x, c) + E_{q_φ(z|x,c,y)}[log p(x_bow|z, c)] (6).", "We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable problem and that it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "At the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenizing using the NLTK tokenizer (Bird et al., 2009); (2) removing non-verbal symbols and repeated words due to false starts; (3) keeping the top 10K frequent word types as the vocabulary.", "The final data have 207,833/5,225/5,481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000).", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015).", "The features include the uni-grams and bi-grams of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with a linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of the SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validation dataset): word embedding has size
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge.", "Following our one-to-many hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M_c reference responses r_j, j ∈ [1, M_c].", "Meanwhile a model can generate N hypothesis responses h_i, i ∈ [1, N].", "The generalized response-level precision/recall for a given dialog context is: precision(c) = (1/N) Σ_{i=1}^{N} max_{j∈[1,M_c]} d(r_j, h_i) and recall(c) = (1/M_c) Σ_{j=1}^{M_c} max_{i∈[1,N]} d(r_j, h_i), where d(r_j, h_i) is a distance function which lies between 0 and 1 and measures the similarity between r_j and h_i.", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: (1) Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified n-gram precision with a length penalty (Papineni et al., 2002; Li et al., 2015).", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to a 0 to 1 scale.", "(2) Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016).", "Here d(r_j, h_i) is the cosine distance of the two embedding vectors.", "We used the GloVe embedding described in Section 4 and denote the average method as A-bow and the extrema method as E-bow.", "(3) Dialog Act Match: to measure the similarity at the discourse level, the same dialog-act tagger from Section 4.1 is applied to label all the generated responses of each model.", "We set d(r_j, h_i) = 1 if r_j and h_i have the same dialog acts, otherwise d(r_j, h_i) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple, reference responses/contexts.", "This impacts the reliability of our measures.", "Inspired by (Sordoni et al., 2015), we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/contexts from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references on average per context.", "The average number of distinct reference dialog acts is 4.2.", "As shown in Table 1, the proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores compared to CVAE in all metrics, which is expected since the baseline tends to generate the most likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-bow).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-19
Training Details
Word Embedding: 200, GloVe pre-trained on Twitter; Utterance Encoder Hidden Size: 300; Context Encoder Hidden Size: 600; Response Decoder Hidden Size: 400; Context Window Size: 10 utterances; Optimizer: Adam, learning rate = 0.001
Word Embedding: 200, GloVe pre-trained on Twitter; Utterance Encoder Hidden Size: 300; Context Encoder Hidden Size: 600; Response Decoder Hidden Size: 400; Context Window Size: 10 utterances; Optimizer: Adam, learning rate = 0.001
[]
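The "Training Details" slide above lists the optimizer settings, and the paper content reports gradient clipping at 5, linear KL annealing and the bag-of-word auxiliary loss. A hedged sketch of how those pieces could be wired together is given below; the `model` interface (returning the three loss components) is hypothetical.

```python
import torch

def train(model, batches, n_anneal_batches=10_000, clip=5.0):
    """Adam (lr=0.001), gradient clipping at 5, linear KL annealing over the
    first `n_anneal_batches` mini-batches, plus the bag-of-word auxiliary loss."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step, batch in enumerate(batches):
        rc_loss, kl_loss, bow_loss = model(batch)            # assumed model output
        kl_weight = min(1.0, step / n_anneal_batches)        # KL annealing schedule
        loss = rc_loss + kl_weight * kl_loss + bow_loss      # annealed lower bound + BOW term
        optimizer.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
        optimizer.step()
```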
GEM-SciDuet-train-100#paper-1263#slide-20
1263
Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders
While recent neural encoder-decoder models have shown great promise in modeling open-domain conversations, they often generate dull and generic responses. Unlike past work that has focused on diversifying the output of the decoder at word-level to alleviate this problem, we present a novel framework based on conditional variational autoencoders that captures the discourse-level diversity in the encoder. Our model uses latent variables to learn a distribution over potential conversational intents and generates diverse responses using only greedy decoders. We have further developed a novel variant that is integrated with linguistic prior knowledge for better performance. Finally, the training procedure is improved by introducing a bag-of-word loss. Our proposed models have been validated to generate significantly more diverse responses than baseline approaches and exhibit competence in discourse-level decision-making.
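The abstract above notes that diverse responses are produced with only greedy decoders by drawing multiple samples of the latent variable. A minimal sketch of that sampling loop follows; the `prior_net` and `greedy_decode` interfaces are hypothetical placeholders.

```python
import torch

def sample_responses(prior_net, greedy_decode, context, n_samples=5):
    """Draw N latent samples z ~ p_theta(z|c) and greedily decode each one,
    so that all response diversity comes from z rather than from the decoder."""
    responses = []
    for _ in range(n_samples):
        mu, logvar = prior_net(context)                            # prior network output
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)    # reparametrized sample
        responses.append(greedy_decode(z, context))                # greedy word-by-word decoding
    return responses
```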
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196 ], "paper_content_text": [ "Introduction The dialog manager is one of the key components of dialog systems, which is responsible for modeling the decision-making process.", "Specifically, it typically takes a new utterance and the dialog context as input, and generates discourse-level decisions (Bohus and Rudnicky, 2003; Williams and Young, 2007) .", "Advanced dialog managers usually have a list of potential actions that enable them to have diverse behavior during a conversation, e.g.", "different strategies to recover from non-understanding .", "However, the conventional approach of designing a dialog manager (Williams and Young, 2007) does not scale well to open-domain conversation models because of the vast quantity of possible decisions.", "Thus, there has been a growing interest in applying encoder-decoder models for modeling open-domain conversation (Vinyals and Le, 2015; Serban et al., 2016a) .", "The basic approach treats a conversation as a transduction task, in which the dialog history is the source sequence and the next response is the target sequence.", "The model is then trained end-to-end on large conversation corpora using the maximum-likelihood estimation (MLE) objective without the need for manual crafting.", "However recent research has found that encoder-decoder models tend to generate generic and dull responses (e.g., I don't know), rather than meaningful and specific answers (Li et al., 2015; Serban et al., 2016b) .", "There have been many attempts to explain and solve this limitation, and they can be broadly divided into two categories (see Section 2 for details): (1) the first category argues that the dialog history is only one of the factors that decide the next response.", "Other features should be extracted and provided to the models as conditionals in order to generate more specific responses (Xing et al., 2016; Li et al., 2016a) ; (2) the second category aims to improve the encoder-decoder model itself, including decoding with beam search and its variations (Wiseman and Rush, 2016) , encouraging responses that have long-term payoff (Li et al., 2016b) , etc.", "Building upon the past work in dialog managers and encoder-decoder models, the key idea of this paper is to model dialogs as a one-to-many problem at the discourse level.", "Previous studies indicate that there are many factors in open-domain dialogs that decide the next response, and it is nontrivial to extract all of them.", "Intuitively, given a similar dialog history (and other observed inputs), there may exist many valid responses (at the discourse-level), each corresponding to a certain configuration of the latent 
variables that are not presented in the input.", "To uncover the potential responses, we strive to model a probabilistic distribution over the distributed utterance embeddings of the potential responses using a latent variable ( Figure 1 ).", "This allows us to generate diverse responses by drawing samples from the learned distribution and reconstruct their words via a decoder neural network.", "Specifically, our contributions are three-fold: 1.", "We present a novel neural dialog model adapted from conditional variational autoencoders (CVAE) , which introduces a latent variable that can capture discourse-level variations as described above 2.", "We propose Knowledge-Guided CVAE (kgC-VAE), which enables easy integration of expert knowledge and results in performance improvement and model interpretability.", "3.", "We develop a training method in addressing the difficulty of optimizing CVAE for natural language generation (Bowman et al., 2015) .", "We evaluate our models on human-human conversation data and yield promising results in: (a) generating appropriate and discourse-level diverse responses, and (b) showing that the proposed training method is more effective than the previous techniques.", "Related Work Our work is related to both recent advancement in encoder-decoder dialog models and generative models based on CVAE.", "Encoder-decoder Dialog Models Since the emergence of the neural dialog model, the problem of output diversity has received much attention in the research community.", "Ideal output responses should be both coherent and diverse.", "However, most models end up with generic and dull responses.", "To tackle this problem, one line of research has focused on augmenting the input of encoder-decoder models with richer context information, in order to generate more spe-cific responses.", "Li et al., (2016a) captured speakers' characteristics by encoding background information and speaking style into the distributed embeddings, which are used to re-rank the generated response from an encoder-decoder model.", "Xing et al., (2016) maintain topic encoding based on Latent Dirichlet Allocation (LDA) (Blei et al., 2003) of the conversation to encourage the model to output more topic coherent responses.", "On the other hand, many attempts have also been made to improve the architecture of encoderdecoder models.", "Li et al,.", "(2015) proposed to optimize the standard encoder-decoder by maximizing the mutual information between input and output, which in turn reduces generic responses.", "This approach penalized unconditionally high frequency responses, and favored responses that have high conditional probability given the input.", "Wiseman and Rush (2016) focused on improving the decoder network by alleviating the biases between training and testing.", "They introduced a searchbased loss that directly optimizes the networks for beam search decoding.", "The resulting model achieves better performance on word ordering, parsing and machine translation.", "Besides improving beam search, Li et al., (2016b) pointed out that the MLE objective of an encoder-decoder model is unable to approximate the real-world goal of the conversation.", "Thus, they initialized a encoderdecoder model with MLE objective and leveraged reinforcement learning to fine tune the model by optimizing three heuristic rewards functions: informativity, coherence, and ease of answering.", "Conditional Variational Autoencoder The variational autoencoder (VAE) (Kingma and Welling, 2013; Rezende et al., 2014) is one of 
the most popular frameworks for image generation.", "The basic idea of VAE is to encode the input x into a probability distribution z instead of a point encoding in the autoencoder.", "Then VAE applies a decoder network to reconstruct the original input using samples from z.", "To generate images, VAE first obtains a sample of z from the prior distribution, e.g.", "N (0, I), and then produces an image via the decoder network.", "A more advanced model, the conditional VAE (CVAE), is a recent modification of VAE to generate diverse images conditioned on certain attributes, e.g.", "generating different human faces given skin color .", "Inspired by CVAE, we view the dialog contexts as the conditional attributes and adapt CVAE to generate diverse responses instead of images.", "Although VAE/CVAE has achieved impressive results in image generation, adapting this to natural language generators is non-trivial.", "Bowman et al., (2015) have used VAE with Long-Short Term Memory (LSTM)-based recognition and decoder networks to generate sentences from a latent Gaussian variable.", "They showed that their model is able to generate diverse sentences with even a greedy LSTM decoder.", "They also reported the difficulty of training because the LSTM decoder tends to ignore the latent variable.", "We refer to this issue as the vanishing latent variable problem.", "Serban et al., (2016b) have applied a latent variable hierarchical encoder-decoder dialog model to introduce utterance-level variations and facilitate longer responses.", "To improve upon the past models, we firstly introduce a novel mechanism to leverage linguistic knowledge in training end-to-end neural dialog models, and we also propose a novel training technique that mitigates the vanishing latent variable problem.", "Proposed Models Conditional Variational Autoencoder (CVAE) for Dialog Generation Each dyadic conversation can be represented via three random variables: the dialog context c (context window size k − 1), the response utterance x (the k th utterance) and a latent variable z, which is used to capture the latent distribution over the valid responses.", "Further, c is composed of the dialog history: the preceding k-1 utterances; conversational floor (1 if the utterance is from the same speaker of x, otherwise 0) and meta features m (e.g.", "the topic).", "We then define the conditional distribution p(x, z|c) = p(x|z, c)p(z|c) and our goal is to use deep neural networks (parametrized by θ) to approximate p(z|c) and p(x|z, c).", "We refer to p θ (z|c) as the prior network and p θ (x, |z, c) as the response decoder.", "Then the generative process of x is (Figure 2 (a)): 1.", "Sample a latent variable z from the prior network p θ (z|c).", "2.", "Generate x through the response decoder p θ (x|z, c).", "CVAE is trained to maximize the conditional log likelihood of x given c, which involves an intractable marginalization over the latent variable z.", "As proposed in , CVAE can be efficiently trained with the Stochastic Gradient Variational Bayes (SGVB) framework (Kingma and Welling, 2013) by maximizing the variational lower bound of the conditional log likelihood.", "We assume the z follows multivariate Gaussian distribution with a diagonal covariance matrix and introduce a recognition network q φ (z|x, c) to approximate the true posterior distribution p(z|x, c).", "have shown that the variational lower bound can be written as: Figure 3 demonstrates an overview of our model.", "The utterance encoder is a bidirectional recurrent neural 
network (BRNN) (Schuster and Paliwal, 1997) with a gated recurrent unit (GRU) (Chung et al., 2014) to encode each utterance into fixedsize vectors by concatenating the last hidden states of the forward and backward RNN u L(θ, φ; x, c) = −KL(q φ (z|x, c) p θ (z|c)) + E q φ (z|c,x) [log p θ (x|z, c)] (1) ≤ log p(x|c) i = [ h i , h i ].", "x is simply u k .", "The context encoder is a 1-layer GRU network that encodes the preceding k-1 utterances by taking u 1:k−1 and the corresponding conversation floor as inputs.", "The last hidden state h c of the context encoder is concatenated with meta features and c = [h c , m].", "Since we assume z follows isotropic Gaussian distribution, the recognition network q φ (z|x, c) ∼ N (µ, σ 2 I) and the prior network p θ (z|c) ∼ N (µ , σ 2 I), and then we have: µ log(σ 2 ) = W r x c + b r (2) µ log(σ 2 ) = MLP p (c) (3) We then use the reparametrization trick (Kingma and Welling, 2013) to obtain samples of z either from N (z; µ, σ 2 I) predicted by the recognition network (training) or N (z; µ , σ 2 I) predicted by the prior network (testing).", "Finally, the response decoder is a 1-layer GRU network with initial state s 0 = W i [z, c]+b i .", "The response decoder then predicts the words in x sequentially.", "Knowledge-Guided CVAE (kgCVAE) In practice, training CVAE is a challenging optimization problem and often requires large amount of data.", "On the other hand, past research in spoken dialog systems and discourse analysis has suggested that many linguistic cues capture crucial features in representing natural conversation.", "For example, dialog acts (Poesio and Traum, 1998) have been widely used in the dialog managers (Litman and Allen, 1987; Raux et al., 2005; Zhao and Eskenazi, 2016) to represent the propositional function of the system.", "Therefore, we conjecture that it will be beneficial for the model to learn meaningful latent z if it is provided with explicitly extracted discourse features during the training.", "In order to incorporate the linguistic features into the basic CVAE model, we first denote the set of linguistic features as y.", "Then we assume that the generation of x depends on c, z and y. y relies on z and c as shown in Figure 2 .", "Specifically, during training the initial state of the response decoder is s 0 = W i [z, c, y] + b i and the input at every step is [e t , y] where e t is the word embedding of t th word in x.", "In addition, there is an MLP to predict y = MLP y (z, c) based on z and c. In the testing stage, the predicted y is used by the response decoder instead of the oracle decoders.", "We denote the modified model as knowledge-guided CVAE (kgCVAE) and developers can add desired discourse features that they wish the latent variable z to capture.", "KgCVAE model is trained by maximizing: L(θ, φ; x, c, y) = −KL(q φ (z|x, c, y) P θ (z|c)) + E q φ (z|c,x,y) [log p(x|z, c, y)] + E q φ (z|c,x,y) [log p(y|z, c)] (4) Since now the reconstruction of y is a part of the loss function, kgCVAE can more efficiently encode y-related information into z than discovering it only based on the surface-level x and c. 
Another advantage of kgCVAE is that it can output a highlevel label (e.g.", "dialog act) along with the wordlevel responses, which allows easier interpretation of the model's outputs.", "Optimization Challenges A straightforward VAE with RNN decoder fails to encode meaningful information in z due to the vanishing latent variable problem (Bowman et al., 2015) .", "Bowman et al., (2015) proposed two solutions: (1) KL annealing: gradually increasing the weight of the KL term from 0 to 1 during training; (2) word drop decoding: setting a certain percentage of the target words to 0.", "We found that CVAE suffers from the same issue when the decoder is an RNN.", "Also we did not consider word drop decoding because Bowman et al,.", "(2015) have shown that it may hurt the performance when the drop rate is too high.", "As a result, we propose a simple yet novel technique to tackle the vanishing latent variable problem: bag-of-word loss.", "The idea is to introduce an auxiliary loss that requires the decoder network to predict the bag-of-words in the response x as shown in Figure 3 (b).", "We decompose x into two variables: x o with word order and x bow without order, and assume that x o and x bow are conditionally independent given z and c: p(x, z|c) = p(x o |z, c)p(x bow |z, c)p(z|c).", "Due to the conditional independence assumption, the latent variable is forced to capture global information about the target response.", "Let f = MLP b (z, x) ∈ R V where V is vocabulary size, and we have: log p(x bow |z, c) = log |x| t=1 e fx t V j e f j (5) where |x| is the length of x and x t is the word index of t th word in x.", "The modified variational lower bound for CVAE with bag-of-word loss is (see Appendix A for kgCVAE): L (θ, φ; x, c) = L(θ, φ; x, c) + E q φ (z|c,x,y) [log p(x bow |z, c)] (6) We will show that the bag-of-word loss in Equation 6 is very effective against the vanishing latent variable and it is also complementary to the KL annealing technique.", "Experiment Setup Dataset We chose the Switchboard (SW) 1 Release 2 Corpus (Godfrey and Holliman, 1997) to evaluate the proposed models.", "SW has 2400 two-sided telephone conversations with manually transcribed speech and alignment.", "In the beginning of the call, a computer operator gave the callers recorded prompts that define the desired topic of discussion.", "There are 70 available topics.", "We randomly split the data into 2316/60/62 dialogs for train/validate/test.", "The pre-processing includes (1) tokenize using the NLTK tokenizer (Bird et al., 2009 ); (2) remove non-verbal symbols and repeated words due to false starts; (3) keep the top 10K frequent word types as the vocabulary.", "The final data have 207, 833/5, 225/5, 481 (c, x) pairs for train/validate/test.", "Furthermore, a subset of SW was manually labeled with dialog acts (Stolcke et al., 2000) .", "We extracted dialog act labels based on the dialog act recognizer proposed in (Ribeiro et al., 2015) .", "The features include the uni-gram and bi-gram of the utterance, and the contextual features of the last 3 utterances.", "We trained a Support Vector Machine (SVM) (Suykens and Vandewalle, 1999) with linear kernel on the subset of SW with human annotations.", "There are 42 types of dialog acts and the SVM achieved 77.3% accuracy on held-out data.", "Then the rest of SW data are labelled with dialog acts using the trained SVM dialog act recognizer.", "Training We trained with the following hyperparameters (according to the loss on the validate dataset): word embedding has size 
200 and is shared across everywhere.", "We initialize the word embedding from Glove embedding pre-trained on Twitter (Pennington et al., 2014) .", "The utterance encoder has a hidden size of 300 for each direction.", "The context encoder has a hidden size of 600 and the response decoder has a hidden size of 400.", "The prior network and the MLP for predicting y both have 1 hidden layer of size 400 and tanh non-linearity.", "The latent variable z has a size of 200.", "The context window k is 10.", "All the initial weights are sampled from a uniform distribution [-0.08, 0.08].", "The mini-batch size is 30.", "The models are trained end-to-end using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and gradient clipping at 5.", "We selected the best models based on the variational lower bound on the validate data.", "Finally, we use the BOW loss along with KL annealing of 10,000 batches to achieve the best performance.", "Section 5.4 gives a detailed argument for the importance of the BOW loss.", "Results Experiments Setup We compared three neural dialog models: a strong baseline model, CVAE, and kgCVAE.", "The baseline model is an encoder-decoder neural dialog model without latent variables similar to (Serban et al., 2016a) .", "The baseline model's encoder uses the same context encoder to encode the dialog history and the meta features as shown in Figure 3 .", "The encoded context c is directly fed into the decoder networks as the initial state.", "The hyperparameters of the baseline are the same as the ones reported in Section 4.2 and the baseline is trained to minimize the standard cross entropy loss of the decoder RNN model without any auxiliary loss.", "Also, to compare the diversity introduced by the stochasticity in the proposed latent variable versus the softmax of RNN at each decoding step, we generate N responses from the baseline by sam-pling from the softmax.", "For CVAE/kgCVAE, we sample N times from the latent z and only use greedy decoders so that the randomness comes entirely from the latent variable z. 
Quantitative Analysis Automatically evaluating an open-domain generative dialog model is an open research challenge .", "Following our one-tomany hypothesis, we propose the following metrics.", "We assume that for a given dialog context c, there exist M c reference responses r j , j ∈ [1, M c ].", "Meanwhile a model can generate N hypothesis re- sponses h i , i ∈ [1, N ].", "The generalized responselevel precision/recall for a given dialog context is: precision(c) = N i=1 max j∈[1,Mc] d(r j , h i ) N recall(c) = Mc j=1 max i∈[1,N ] d(r j , h i )) M c where d(r j , h i ) is a distance function which lies between 0 to 1 and measures the similarities between r j and h i .", "The final score is averaged over the entire test dataset and we report the performance with 3 types of distance functions in order to evaluate the systems from various linguistic points of view: Smoothed Sentence-level BLEU (Chen and Cherry, 2014): BLEU is a popular metric that measures the geometric mean of modified ngram precision with a length penalty (Papineni et al., 2002; Li et al., 2015) .", "We use BLEU-1 to 4 as our lexical similarity metric and normalize the score to 0 to 1 scale.", "Cosine Distance of Bag-of-word Embedding: a simple method to obtain sentence embeddings is to take the average or extrema of all the word embeddings in the sentences (Forgues et al., 2014; Adi et al., 2016) .", "The d(r j , h i ) is the cosine distance of the two embedding vectors.", "We used Glove embedding described in Section 4 and denote the average method as A-bow and extrema method as E-bow.", "3.", "Dialog Act Match: to measure the similarity at the discourse level, the same dialogact tagger from 4.1 is applied to label all the generated responses of each model.", "We set d(r j , h i ) = 1 if r j and h i have the same dialog acts, otherwise d(r j , h i ) = 0.", "One challenge of using the above metrics is that there is only one, rather than multiple reference responses/contexts.", "This impacts reliability of our measures.", "Inspired by (Sordoni et al., 2015) , we utilized information retrieval techniques (see Appendix A) to gather 10 extra candidate reference responses/context from other conversations with the same topics.", "Then the 10 candidate references are filtered by two experts, which serve as the ground truth to train the reference response classifier.", "The result is 6.69 extra references in average per context.", "The average number of distinct reference dialog acts is 4.2.", "Table 1 The proposed models outperform the baseline in terms of recall in all the metrics with statistical significance.", "This confirms our hypothesis that generating responses with discourse-level diversity can lead to a more comprehensive coverage of the potential responses than promoting only word-level diversity.", "As for precision, we observed that the baseline has higher or similar scores than CVAE in all metrics, which is expected since the baseline tends to generate the mostly likely and safe responses repeatedly in the N hypotheses.", "However, kgCVAE is able to achieve the highest precision and recall in the 4 metrics at the same time (BLEU1-4, E-BOW).", "One reason for kgCVAE's good performance is that the predicted dialog act label in kgCVAE can regularize the generation process of its RNN decoder by forcing it to generate more coherent and precise words.", "We further analyze the precision/recall of BLEU-4 by looking at the average score versus the number of distinct reference dialog acts.", "A low number of distinct dialog 
acts represents the situation where the dialog context has a strong constraint on the range of the next response (low entropy), while a high number indicates the opposite (highentropy).", "Figure 4 shows that CVAE/kgCVAE achieves significantly higher recall than the baseline in higher entropy contexts.", "Also it shows that CVAE suffers from lower precision, especially in low entropy contexts.", "Finally, kgCVAE gets higher precision than both the baseline and CVAE in the full spectrum of context entropy.", "Table 2 shows the outputs generated from the baseline and kgCVAE.", "In example 1, caller A begins with an open-ended question.", "The kgCVAE model generated highly diverse answers that cover multiple plausible dialog acts.", "Further, we notice that the generated text exhibits similar dialog acts compared to the ones predicted separately by the model, implying the consistency of natural language generation based on y.", "On the contrary, the responses from the baseline model are limited to local n-gram variations and share a similar prefix, i.e.", "\"I'm\".", "Example 2 is a situation where caller A is telling B stories.", "The ground truth response is a back-channel and the range of valid answers is more constrained than example 1 since B is playing the role of a listener.", "The baseline successfully predicts \"uh-huh\".", "The kgCVAE model is also able to generate various ways of back-channeling.", "This implies that the latent z is able to capture context-sensitive variations, i.e.", "in low-entropy dialog contexts modeling lexical diversity while in high-entropy ones modeling discourse-level diversity.", "Moreover, kgCVAE is occasionally able to generate more sophisticated grounding (sample 4) beyond a simple back-channel, which is also an acceptable response given the dialog context.", "Qualitative Analysis In addition, past work (Kingma and Welling, 2013) has shown that the recognition network is able to learn to cluster high-dimension data, so we conjecture that posterior z outputted from the recognition network should cluster the responses into meaningful groups.", "Figure 5 visualizes the posterior z of responses in the test dataset in 2D space using t-SNE (Maaten and Hinton, 2008) .", "We found that the learned latent space is highly correlated with the dialog act and length of responses, which confirms our assumption.", "Results for Bag-of-Word Loss Finally, we evaluate the effectiveness of bag-ofword (BOW) loss for training VAE/CVAE with the RNN decoder.", "To compare with past work (Bowman et al., 2015) , we conducted the same language modelling (LM) task on Penn Treebank using VAE.", "The network architecture is same except we use GRU instead of LSTM.", "We compared four different training setups: (1) standard VAE without any heuristics; (2) VAE with KL annealing (KLA); (3) VAE with BOW loss; (4) VAE with both BOW loss and KLA.", "Intuitively, a well trained model should lead to a low reconstruction loss and small but non-trivial KL cost.", "For all models with KLA, the KL weight increases linearly from 0 to 1 in the first 5000 batches.", "Table 3 shows the reconstruction perplexity and the KL cost on the test dataset.", "The standard VAE fails to learn a meaningful latent variable by hav- Table 2 : Generated responses from the baselines and kgCVAE in two examples.", "KgCVAE also provides the predicted dialog act for each response.", "The context only shows the last utterance due to space limit (the actual context window size is 10).", "ing a KL cost close to 0 and a 
reconstruction perplexity similar to a small LSTM LM (Zaremba et al., 2014) .", "KLA helps to improve the reconstruction loss, but it requires early stopping since the models will fall back to the standard VAE after the KL weight becomes 1.", "At last, the models with BOW loss achieved significantly lower perplexity and larger KL cost.", "Figure 6 visualizes the evolution of the KL cost.", "We can see that for the standard model, the KL cost crashes to 0 at the beginning of training and never recovers.", "On the contrary, the model with only KLA learns to encode substantial information in latent z when the KL cost weight is small.", "However, after the KL weight is increased to 1 (after 5000 batch), the model once again decides to ignore the latent z and falls back to the naive implementation.", "The model with BOW loss, however, consistently converges to a non-trivial KL cost even without KLA, which confirms the importance of BOW loss for training latent variable models with the RNN decoder.", "Last but not least, our experiments showed that the conclusions drawn from LM using VAE also apply to training CVAE/kgCVAE, so we used BOW loss together with KLA for all previous experiments.", "Conclusion and Future Work In conclusion, we identified the one-to-many nature of open-domain conversation and proposed two novel models that show superior performance in generating diverse and appropriate responses at the discourse level.", "While the current paper addresses diversifying responses in respect to dialogue acts, this work is part of a larger research direction that targets leveraging both past linguistic findings and the learning power of deep neural networks to learn better representation of the latent factors in dialog.", "In turn, the output of this novel neural dialog model will be easier to explain and control by humans.", "In addition to dialog acts, we plan to apply our kgCVAE model to capture other different linguistic phenomena including sentiment, named entities,etc.", "Last but not least, the recognition network in our model will serve as the foundation for designing a datadriven dialog manager, which automatically discovers useful high-level intents.", "All of the above suggest a promising research direction." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "3.3", "4.1", "4.2", "5.1", "5.2", "1.", "2.", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Encoder-decoder Dialog Models", "Conditional Variational Autoencoder", "Conditional Variational Autoencoder (CVAE) for Dialog Generation", "Knowledge-Guided CVAE (kgCVAE)", "Optimization Challenges", "Dataset", "Training", "Experiments Setup", "Quantitative Analysis", "Smoothed Sentence-level BLEU (Chen and", "Cosine", "Qualitative Analysis", "Results for Bag-of-Word Loss", "Conclusion and Future Work" ] }
GEM-SciDuet-train-100#paper-1263#slide-20
Testset Creation
Use 10-nearest neighbour to collect similar contexts in the training data; label the appropriateness of the 10 candidate responses for a subset by 2 human annotators; bootstrap via SVM on the whole test set (5,481 context/response pairs). Resulting: 6.79 avg. reference responses/context; 4.2 distinct reference dialog acts
Use 10-nearest neighbour to collect similar contexts in the training data; label the appropriateness of the 10 candidate responses for a subset by 2 human annotators; bootstrap via SVM on the whole test set (5,481 context/response pairs). Resulting: 6.79 avg. reference responses/context; 4.2 distinct reference dialog acts
[]
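The paper content above introduces a bag-of-word auxiliary loss (Equation 5) that asks the decoder to predict the unordered words of the response from logits f ∈ R^V. A short PyTorch sketch of that term is given here; the network producing f is omitted and the toy sizes are assumptions.

```python
import torch
import torch.nn.functional as F

def bow_log_likelihood(f, target_word_ids):
    """Bag-of-word term of Eq. (5): sum_t log softmax(f)[x_t] over the words
    of the target response, where f (shape [V]) are logits produced by an
    MLP from the latent variable (that MLP is not shown here)."""
    log_probs = F.log_softmax(f, dim=-1)      # log-probabilities over the vocabulary
    return log_probs[target_word_ids].sum()   # sum of log-probs of the response words

# Toy usage with a made-up vocabulary of size 10 and a 4-word response.
f = torch.randn(10)
x = torch.tensor([2, 5, 5, 9])
loss = -bow_log_likelihood(f, x)              # negated so it can be added as a loss term
```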
GEM-SciDuet-train-101#paper-1265#slide-0
1265
Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-ofthe-art rumor detection models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas.", "The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory 1 .", "The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line.", "We can see that after the initial post, the tweet was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread.", "A widely accepted definition of rumor is \"unverified and instrumentally relevant information statements in circulation\" (DiFonzo and Bordia, 2007) .", "This unverified information may eventually turn out to be true, or partly or entirely false.", "In today's ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society.", "Therefore, it is crucial to track and debunk such rumors in timely manner.", "Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors.", "However, such endeavor is manual, thus prone to poor coverage and low speed.", "Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.).", "But such an approach was over simplified as they ignored the dynamics of rumor propagation.", "Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013 (Kwon et al., , 2017 rather than the structure of propagation.", "So, can the propagation structure make any difference for differentiating rumors from nonrumors?", "Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014) .", "However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017) .", "Intuitively, for \"successful\" rumors being propagated as widely as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a 
lot of audiences joining in promoting the propagation.", "We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share.", "Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors.", "Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased.", "Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color).", "The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users' stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets.", "Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did.", "In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general.", "Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2 : Fragments of the propagation for two source tweets.", "Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps.", "Ma et al., 2015 Ma et al., , 2016 cannot be applied easily on such complex, dynamic structures.", "To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user's interactions to one another triggered by the source tweet.", "Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees.", "Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts.", "The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors.", "We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission.", "Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin.", "Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not.", "Since a rumor often begins as unverified and later turns 
out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016) , here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem.", "Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991) .", "Castillo et al.", "(2011) studied information credibility on Twitter using a wide range of hand-crafted features.", "Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015) .", "Zhao et al.", "(2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor.", "All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data.", "Some studies focus on finding temporal patterns for understanding rumor diffusion.", "Kown et al.", "(2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume.", "Ma et al.", "(2015) extended the model using time series to capture the variation of features over time.", "Friggeri et al.", "(2014) and Hannak et al.", "(2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites.", "More recently, Ma et al.", "(2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times.", "Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel.", "Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies.", "Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task.", "Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011) .", "Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001) , question-answering (Moschitti, 2006) , semantic analysis (Moschitti, 2004) , relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010) .", "These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors.", "Our proposed method is a substantial extension of tree kernel for modeling such structures.", "Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users.", "Once a user has posted a tweet, all his followers will receive the tweet.", "Furthermore, Twitter allows a user to retweet or comment another user's post, so that the information could reach beyond the network of the original creator.", "We model the propagation of each source tweet as a tree structure T (r) = V, E , where r is the source tweet as well as the root of the tree, V refers to a set of nodes each 
representing a responsive post (i.e., retweet or reply) of a user at a certain time to the source tweet r which initiates the circulation, and E is a set of directed edges corresponding to the response relation among the nodes in V .", "If there exists a directed edge from v i to v j , it means v j is a direct response to v i .", "More specifically, each node v ∈ V is repre- sented as a tuple v = (u v , c v , t v ) , which provides the following information: u v is the creator of the post, c v represents the text content of the post, and t v is the time lag between the source tweet r and v. In our case, u v contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., c v is a vector of binary features based on uni-grams and/or bi-grams representing the post's content.", "Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using kernel method called Propagation Tree Kernel (PTK).", "Our task is, given a propagation tree T (r) of a source tweet r, to predict the label of r. Background of Tree Kernel Before presenting our proposed algorithm, we briefly present the traditional tree kernel, which our PTK model is based on.", "Tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees.", "Given a syntactic parse tree, each node with its children is associated with a grammar production rule.", "Figure 3 illustrates the syntactic parse tree of \"cut a tree\" and its subtrees.", "A subtree is defined as any subgraph which has more than one nodes, with the restriction that entire (not partial) rule productions must be included.", "For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP → D N (Collins and Duffy, 2001) .", "Following Collins and Duffy (2001) , given two parse trees T 1 and T 2 , the kernel function K(T 1 , T 2 ) is defined as: v i ∈V 1 v j ∈V 2 ∆(v i , v j ) (1) where V 1 and V 2 are the sets of all nodes respectively in T 1 and T 2 , and each node is associated with a production rule, and ∆(v i , v j ) evaluates the common subtrees rooted at v i and v j .", "∆(v i , v j ) can be computed using the following recursive procedure (Collins and Duffy, 2001) : 1) if the production rules at v i and v j are different, then ∆(v i , v j ) = 0; 2) else if the production rules at v i and v j are same, and v i and v j have only leaf children (i.e., they are pre-terminal symbols), then ∆(v i , v j ) = λ; 3) else ∆(v i , v j ) = λ min(nc(v i ),nc(v j )) k=1 (1 + ∆(ch(v i , k), ch(v j , k))).", "where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and λ (0 < λ ≤ 1) is a decay factor.", "λ = 1 yields the number of common subtrees; λ < 1 down weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size.", "Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction of different types of rumors and non-rumors based on structural, linguistic and temporal properties.", "However, existing tree kernels cannot be readily applied on propagation trees because 1) unlike parse tree where the node is represented by enumerable nominal value (e.g., part-of-speech tag), the propagation tree node is given as a vector of continuous numerical values 
representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking if the same production rules and the same children are associated with the nodes in two subtrees being compared, whereas in our context the similarity function should be defined softly since hardly two nodes from different propagation trees are same.", "With the representation of propagation tree, we first define a function f to evaluate the similarity between two nodes v i and v j (we simplify the node representation for instance v i = (u i , c i , t i )) as the following: f (v i , v j ) = e −t (αE(u i , u j ) + (1 − α)J (c i , c j )) where t = |t i − t j | is the absolute difference between the time lags of v i and v j , E and J are user-based similarity and content-based similarity, respectively, and α is the trade-off parameter.", "The intuition of using exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor while the same posted far later from initial post may indicate the rumor is still unverified, despite that the two messages are semantically similar.", "The user-based similarity is defined as an Euclidean distance E(u i , u j ) = ||u i − u j || 2 , where u i and u j are the user vectors of node v i and v j and || • || 2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation.", "Contentwise, we use Jaccard coefficient to measure the similarity of post content: J (c i , c j ) = |N gram(c i ) ∩ N gram(c j )| |N gram(c i ) ∪ N gram(c j )| where c i and c j are the sets of content words in two nodes.", "For n-grams here, we adopt both uni-grams and bi-grams.", "It can capture cue terms e.g., 'false', 'debunk', 'not true', etc.", "commonly occurring in rumors but not in non-rumors.", "Given two propagation trees T 1 = V 1 , E 1 and T 2 = V 2 , E 2 , PTK aims to compute the similarity between T 1 and T 2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v i ∈ V 1 , we obtain v i ∈ V 2 , the most similar node of v i from V 2 : v i = arg max v j ∈V 2 f (v i , v j ) Similarly, for each v j ∈ V 2 , we obtain v j ∈ V 1 : v j = arg max v i ∈V 1 f (v i , v j ) Then, the propagation tree kernel K P (T 1 , T 2 ) is defined as: v i ∈V 1 Λ(v i , v i ) + v j ∈V 2 Λ(v j , v j ) (2) where Λ(v, v ) evaluates the similarity of two subtrees rooted at v and v , which is computed recursively as follows: 1) if v or v are leaf nodes, then Λ(v, v ) = f (v, v ); 2) else Λ(v, v ) = f (v, v ) min(nc(v),nc(v )) k=1 (1 + Λ(ch(v, k), ch(v , k))) Note that unlike traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, λ in tree kernel is not needed as subtree size is not an issue here thanks to node similarity f .", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses on rumors spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout entire propagation periods, and structural and temporal patterns help for longitudinal diffusion (Zubiaga et 
al., 2016; Kwon et al., 2017) .", "Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from source post to the current subtree.", "Intuitively, propagation paths provide further clue for determining the truthfulness of information since they embed the route and context of how the propagation happens.", "Therefore, we propose contextsensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares similar intuition with the context-sensitive tree kernel (Zhou et al., 2007) .", "For a propagation tree node v ∈ T (r), let L r v be the length (i.e., # of nodes) of the propagation path from root r to v, and v[x] be the x-th ancestor of v on the path starting from v (0 ≤ x < L r v , v[0] = v, v[L r v − 1] = r) .", "cPTK evaluates the similarity between two trees T 1 (r 1 ) and T 2 (r 2 ) as follows: v i ∈V 1 L r 1 v i −1 x=0 Λ x (v i , v i ) + v j ∈V 2 L r 2 v j −1 x=0 Λ x (v j , v j ) (3) where Λ x (v, v ) measures the similarity of sub- trees rooted at v[x] and v [x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Λ x (v, v ) = f (v[x], v [x]), where v[x] and v [x] are the x-th ancestor nodes of v and v on the respective propagation path.", "2) else Λ x (v, v ) = Λ(v, v ), namely PTK.", "Clearly, PTK is a special case of cPTK when x = 0 (see equation 3).", "cPTK evaluates the oc-currence of both context-free (without considering ancestors on propagation paths) and contextsensitive cases.", "Rumor Detection via Kernel Learning The advantage of kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004) .", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and its similarity values with all training instances as feature space.", "Therefore, the kernel matrix of training set is m × m and that of test set is n × m where m and n are the sizes of training and test sets, respectively.", "For our multi-class task, we perform a one-vsall classification for each label and then assign the one with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to interpretability of results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015) .", "Experiments and Results Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more accurately, the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016) .", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets 2 that are highly retweeted or replied.", "We then collected all the propagation threads (i.e., retweets and replies) for these source tweets.", "Because 
Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from 2 Though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful Twrench 3 and crawled the replies through Twitter's web interface.", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article in rumor debunking websites (e.g., snopes.com, Emergent.info, etc).", "Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) For a source tweet in false rumor event, we flip over the label and assign true to the source tweet if it expresses denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/nochange rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible 4 .", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time-series to model the variation of a set of hand-crafted features (Ma et al., 2015) .", "DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015) , which searches for enquiry phrases and clusters disputed factual claims, and ranked the clustered results based on statistical features.", "DTC and SVM-RBF: The Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with RBF kernel (Yang et al., 2012) , respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: The Random Forest Classifier proposed by Kwon et al.", "(2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to the user, linguistic and structure characteristics.", "GRU: The RNN-based rumor detection model proposed by Ma et al.", "(2016) with gated recurrent unit for representation learning of high-level features from relevant posts over time.", "BOW: A naive baseline we worked by representing the text in each tree using bag-of-words and building the rumor classifier with linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTKand cPTKare the setting of only using content while ignoring user properties.", "We implemented DTC and RFC with Weka 5 , SVM models with LibSVM 6 and GRU with Theano 7 .", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy, F 1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best, which learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., \"what?", "\", \"really?", "\", \"not sure\", etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.", "Although DTR uses a set of regular expressions, we found only 19.59% and 22.21% tweets in 
our datasets containing these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features especially focusing on temporal traits.", "But none of the models can directly incorporate structured propagation patterns for deep similarity compar- ison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of the structural kernel like ours.", "So, they performed obviously worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich of linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTKwhich only uses text is already better than GRU, demonstrating the importance of propagation structures.", "PTK that combines text and user yields better results on both datasets, implying that both properties are complementary and PTK integrating flat and structured information is obviously more effective.", "It is also observed that cPTK outperforms PTK except for non-rumor class.", "This suggests the context-sensitive modeling based on PTK is effective for different types of rumors, but for non- The example subtree of a rumor captured by the algorithm at early stage of propagation rumors, it seems that considering context of propagation path is not always helpful.", "This might be due to the generally weak signals originated from node properties on the paths during non-rumor's diffusion since user distribution patterns in nonrumors do not seem as obvious as in rumors.", "This is not an issue in cPTKsince user information is not considered at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs. 
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our ap-proach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures could be taken as quickly as possible.", "In early detection task, all the posts after a detection deadline are invisible during test.", "The earlier the deadline, the less propagation information can be available.", "Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNN) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance than other models.", "Particularly, cPTK achieve 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, that is much faster than other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures especially at early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such pattern was not witnessed in non-rumors in early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms stateof-the-art baselines with large margin for both general and early rumor detection tasks.", "Since kernel-based approach covers more structural information than feature-based methods, it allows kernel to further incorporate information from a high dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
GEM-SciDuet-train-101#paper-1265#slide-0
Introduction
A story or statement whose truth value is unverified or deliberately false The fake news went viral indicates the level of influence. Start from a grass-roots users, promoted by some influential accounts, widely spread
A story or statement whose truth value is unverified or deliberately false The fake news went viral indicates the level of influence. Start from a grass-roots users, promoted by some influential accounts, widely spread
[]
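The paper_content reproduced in the row above (Section 4.2, "Our PTK Model") spells out two formulas: the node-level similarity f(v_i, v_j) = e^{-|t_i - t_j|} (alpha * E(u_i, u_j) + (1 - alpha) * J(c_i, c_j)), where E is a Euclidean distance over user vectors and J a Jaccard overlap of uni-/bi-gram sets, and the recursive subtree score Lambda built on top of f. The sketch below transcribes those two definitions as stated; it is a minimal illustration, not the authors' released implementation, and the node layout used here (user vector, n-gram set, time lag, children) is a hypothetical representation chosen only for readability.

```python
from math import exp, sqrt

def user_distance(u_i, u_j):
    # E(u_i, u_j): Euclidean distance between user feature vectors
    # (e.g., follower count, friend count, verified flag), as described above.
    return sqrt(sum((a - b) ** 2 for a, b in zip(u_i, u_j)))

def content_overlap(c_i, c_j):
    # J(c_i, c_j): Jaccard coefficient over the uni-/bi-gram sets of two posts.
    union = c_i | c_j
    return len(c_i & c_j) / len(union) if union else 0.0

def node_similarity(v_i, v_j, alpha=0.5):
    # f(v_i, v_j) = exp(-|t_i - t_j|) * (alpha * E + (1 - alpha) * J),
    # transcribed literally from the text in the row above.
    # Each node is a hypothetical tuple: (user_vector, ngram_set, time_lag, children).
    u_i, c_i, t_i, _ = v_i
    u_j, c_j, t_j, _ = v_j
    return exp(-abs(t_i - t_j)) * (alpha * user_distance(u_i, u_j)
                                   + (1 - alpha) * content_overlap(c_i, c_j))

def subtree_similarity(v, v_prime, alpha=0.5):
    # Lambda(v, v'): if either node is a leaf, return f(v, v'); otherwise
    # multiply f by prod_k (1 + Lambda(child_k, child'_k)) over aligned children,
    # which implicitly stops at min(nc(v), nc(v')) children as in the text.
    score = node_similarity(v, v_prime, alpha)
    children, children_p = v[3], v_prime[3]
    if not children or not children_p:
        return score
    for child, child_p in zip(children, children_p):
        score *= 1.0 + subtree_similarity(child, child_p, alpha)
    return score
```

The formulas are kept literal here: E enters as a raw Euclidean distance and alpha is the user/content trade-off. The text above does not specify any normalization of the user features, so none is assumed in this sketch.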
GEM-SciDuet-train-101#paper-1265#slide-1
1265
Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-of-the-art rumor detection models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas.", "The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory 1 .", "The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line.", "We can see that after the initial post, the tweet was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread.", "A widely accepted definition of rumor is \"unverified and instrumentally relevant information statements in circulation\" (DiFonzo and Bordia, 2007) .", "This unverified information may eventually turn out to be true, or partly or entirely false.", "In today's ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society.", "Therefore, it is crucial to track and debunk such rumors in timely manner.", "Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors.", "However, such endeavor is manual, thus prone to poor coverage and low speed.", "Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.).", "But such an approach was over simplified as they ignored the dynamics of rumor propagation.", "Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013 (Kwon et al., , 2017 rather than the structure of propagation.", "So, can the propagation structure make any difference for differentiating rumors from nonrumors?", "Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014) .", "However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017) .", "Intuitively, for \"successful\" rumors being propagated as widely as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a 
lot of audiences joining in promoting the propagation.", "We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share.", "Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors.", "Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased.", "Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color).", "The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users' stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets.", "Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did.", "In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general.", "Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2 : Fragments of the propagation for two source tweets.", "Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps.", "Ma et al., 2015 Ma et al., , 2016 cannot be applied easily on such complex, dynamic structures.", "To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user's interactions to one another triggered by the source tweet.", "Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees.", "Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts.", "The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors.", "We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission.", "Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin.", "Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not.", "Since a rumor often begins as unverified and later turns 
out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016) , here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem.", "Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991) .", "Castillo et al.", "(2011) studied information credibility on Twitter using a wide range of hand-crafted features.", "Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015) .", "Zhao et al.", "(2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor.", "All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data.", "Some studies focus on finding temporal patterns for understanding rumor diffusion.", "Kown et al.", "(2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume.", "Ma et al.", "(2015) extended the model using time series to capture the variation of features over time.", "Friggeri et al.", "(2014) and Hannak et al.", "(2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites.", "More recently, Ma et al.", "(2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times.", "Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel.", "Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies.", "Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task.", "Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011) .", "Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001) , question-answering (Moschitti, 2006) , semantic analysis (Moschitti, 2004) , relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010) .", "These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors.", "Our proposed method is a substantial extension of tree kernel for modeling such structures.", "Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users.", "Once a user has posted a tweet, all his followers will receive the tweet.", "Furthermore, Twitter allows a user to retweet or comment another user's post, so that the information could reach beyond the network of the original creator.", "We model the propagation of each source tweet as a tree structure T (r) = V, E , where r is the source tweet as well as the root of the tree, V refers to a set of nodes each 
representing a responsive post (i.e., retweet or reply) of a user at a certain time to the source tweet r which initiates the circulation, and E is a set of directed edges corresponding to the response relation among the nodes in V .", "If there exists a directed edge from v i to v j , it means v j is a direct response to v i .", "More specifically, each node v ∈ V is repre- sented as a tuple v = (u v , c v , t v ) , which provides the following information: u v is the creator of the post, c v represents the text content of the post, and t v is the time lag between the source tweet r and v. In our case, u v contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., c v is a vector of binary features based on uni-grams and/or bi-grams representing the post's content.", "Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using kernel method called Propagation Tree Kernel (PTK).", "Our task is, given a propagation tree T (r) of a source tweet r, to predict the label of r. Background of Tree Kernel Before presenting our proposed algorithm, we briefly present the traditional tree kernel, which our PTK model is based on.", "Tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees.", "Given a syntactic parse tree, each node with its children is associated with a grammar production rule.", "Figure 3 illustrates the syntactic parse tree of \"cut a tree\" and its subtrees.", "A subtree is defined as any subgraph which has more than one nodes, with the restriction that entire (not partial) rule productions must be included.", "For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP → D N (Collins and Duffy, 2001) .", "Following Collins and Duffy (2001) , given two parse trees T 1 and T 2 , the kernel function K(T 1 , T 2 ) is defined as: v i ∈V 1 v j ∈V 2 ∆(v i , v j ) (1) where V 1 and V 2 are the sets of all nodes respectively in T 1 and T 2 , and each node is associated with a production rule, and ∆(v i , v j ) evaluates the common subtrees rooted at v i and v j .", "∆(v i , v j ) can be computed using the following recursive procedure (Collins and Duffy, 2001) : 1) if the production rules at v i and v j are different, then ∆(v i , v j ) = 0; 2) else if the production rules at v i and v j are same, and v i and v j have only leaf children (i.e., they are pre-terminal symbols), then ∆(v i , v j ) = λ; 3) else ∆(v i , v j ) = λ min(nc(v i ),nc(v j )) k=1 (1 + ∆(ch(v i , k), ch(v j , k))).", "where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and λ (0 < λ ≤ 1) is a decay factor.", "λ = 1 yields the number of common subtrees; λ < 1 down weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size.", "Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction of different types of rumors and non-rumors based on structural, linguistic and temporal properties.", "However, existing tree kernels cannot be readily applied on propagation trees because 1) unlike parse tree where the node is represented by enumerable nominal value (e.g., part-of-speech tag), the propagation tree node is given as a vector of continuous numerical values 
representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking if the same production rules and the same children are associated with the nodes in two subtrees being compared, whereas in our context the similarity function should be defined softly since hardly two nodes from different propagation trees are same.", "With the representation of propagation tree, we first define a function f to evaluate the similarity between two nodes v i and v j (we simplify the node representation for instance v i = (u i , c i , t i )) as the following: f (v i , v j ) = e −t (αE(u i , u j ) + (1 − α)J (c i , c j )) where t = |t i − t j | is the absolute difference between the time lags of v i and v j , E and J are user-based similarity and content-based similarity, respectively, and α is the trade-off parameter.", "The intuition of using exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor while the same posted far later from initial post may indicate the rumor is still unverified, despite that the two messages are semantically similar.", "The user-based similarity is defined as an Euclidean distance E(u i , u j ) = ||u i − u j || 2 , where u i and u j are the user vectors of node v i and v j and || • || 2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation.", "Contentwise, we use Jaccard coefficient to measure the similarity of post content: J (c i , c j ) = |N gram(c i ) ∩ N gram(c j )| |N gram(c i ) ∪ N gram(c j )| where c i and c j are the sets of content words in two nodes.", "For n-grams here, we adopt both uni-grams and bi-grams.", "It can capture cue terms e.g., 'false', 'debunk', 'not true', etc.", "commonly occurring in rumors but not in non-rumors.", "Given two propagation trees T 1 = V 1 , E 1 and T 2 = V 2 , E 2 , PTK aims to compute the similarity between T 1 and T 2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v i ∈ V 1 , we obtain v i ∈ V 2 , the most similar node of v i from V 2 : v i = arg max v j ∈V 2 f (v i , v j ) Similarly, for each v j ∈ V 2 , we obtain v j ∈ V 1 : v j = arg max v i ∈V 1 f (v i , v j ) Then, the propagation tree kernel K P (T 1 , T 2 ) is defined as: v i ∈V 1 Λ(v i , v i ) + v j ∈V 2 Λ(v j , v j ) (2) where Λ(v, v ) evaluates the similarity of two subtrees rooted at v and v , which is computed recursively as follows: 1) if v or v are leaf nodes, then Λ(v, v ) = f (v, v ); 2) else Λ(v, v ) = f (v, v ) min(nc(v),nc(v )) k=1 (1 + Λ(ch(v, k), ch(v , k))) Note that unlike traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, λ in tree kernel is not needed as subtree size is not an issue here thanks to node similarity f .", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses on rumors spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout entire propagation periods, and structural and temporal patterns help for longitudinal diffusion (Zubiaga et 
al., 2016; Kwon et al., 2017) .", "Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from source post to the current subtree.", "Intuitively, propagation paths provide further clue for determining the truthfulness of information since they embed the route and context of how the propagation happens.", "Therefore, we propose contextsensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares similar intuition with the context-sensitive tree kernel (Zhou et al., 2007) .", "For a propagation tree node v ∈ T (r), let L r v be the length (i.e., # of nodes) of the propagation path from root r to v, and v[x] be the x-th ancestor of v on the path starting from v (0 ≤ x < L r v , v[0] = v, v[L r v − 1] = r) .", "cPTK evaluates the similarity between two trees T 1 (r 1 ) and T 2 (r 2 ) as follows: v i ∈V 1 L r 1 v i −1 x=0 Λ x (v i , v i ) + v j ∈V 2 L r 2 v j −1 x=0 Λ x (v j , v j ) (3) where Λ x (v, v ) measures the similarity of sub- trees rooted at v[x] and v [x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Λ x (v, v ) = f (v[x], v [x]), where v[x] and v [x] are the x-th ancestor nodes of v and v on the respective propagation path.", "2) else Λ x (v, v ) = Λ(v, v ), namely PTK.", "Clearly, PTK is a special case of cPTK when x = 0 (see equation 3).", "cPTK evaluates the oc-currence of both context-free (without considering ancestors on propagation paths) and contextsensitive cases.", "Rumor Detection via Kernel Learning The advantage of kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004) .", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and its similarity values with all training instances as feature space.", "Therefore, the kernel matrix of training set is m × m and that of test set is n × m where m and n are the sizes of training and test sets, respectively.", "For our multi-class task, we perform a one-vsall classification for each label and then assign the one with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to interpretability of results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015) .", "Experiments and Results Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more accurately, the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016) .", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets 2 that are highly retweeted or replied.", "We then collected all the propagation threads (i.e., retweets and replies) for these source tweets.", "Because 
Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from 2 Though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful Twrench 3 and crawled the replies through Twitter's web interface.", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article in rumor debunking websites (e.g., snopes.com, Emergent.info, etc).", "Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) For a source tweet in false rumor event, we flip over the label and assign true to the source tweet if it expresses denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/nochange rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible 4 .", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time-series to model the variation of a set of hand-crafted features (Ma et al., 2015) .", "DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015) , which searches for enquiry phrases and clusters disputed factual claims, and ranked the clustered results based on statistical features.", "DTC and SVM-RBF: The Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with RBF kernel (Yang et al., 2012) , respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: The Random Forest Classifier proposed by Kwon et al.", "(2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to the user, linguistic and structure characteristics.", "GRU: The RNN-based rumor detection model proposed by Ma et al.", "(2016) with gated recurrent unit for representation learning of high-level features from relevant posts over time.", "BOW: A naive baseline we worked by representing the text in each tree using bag-of-words and building the rumor classifier with linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTKand cPTKare the setting of only using content while ignoring user properties.", "We implemented DTC and RFC with Weka 5 , SVM models with LibSVM 6 and GRU with Theano 7 .", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy, F 1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best, which learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., \"what?", "\", \"really?", "\", \"not sure\", etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.", "Although DTR uses a set of regular expressions, we found only 19.59% and 22.21% tweets in 
our datasets containing these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features especially focusing on temporal traits.", "But none of the models can directly incorporate structured propagation patterns for deep similarity compar- ison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of the structural kernel like ours.", "So, they performed obviously worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich of linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTKwhich only uses text is already better than GRU, demonstrating the importance of propagation structures.", "PTK that combines text and user yields better results on both datasets, implying that both properties are complementary and PTK integrating flat and structured information is obviously more effective.", "It is also observed that cPTK outperforms PTK except for non-rumor class.", "This suggests the context-sensitive modeling based on PTK is effective for different types of rumors, but for non- The example subtree of a rumor captured by the algorithm at early stage of propagation rumors, it seems that considering context of propagation path is not always helpful.", "This might be due to the generally weak signals originated from node properties on the paths during non-rumor's diffusion since user distribution patterns in nonrumors do not seem as obvious as in rumors.", "This is not an issue in cPTKsince user information is not considered at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs. 
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our ap-proach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures could be taken as quickly as possible.", "In early detection task, all the posts after a detection deadline are invisible during test.", "The earlier the deadline, the less propagation information can be available.", "Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNN) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance than other models.", "Particularly, cPTK achieve 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, that is much faster than other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures especially at early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such pattern was not witnessed in non-rumors in early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms stateof-the-art baselines with large margin for both general and early rumor detection tasks.", "Since kernel-based approach covers more structural information than feature-based methods, it allows kernel to further incorporate information from a high dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
GEM-SciDuet-train-101#paper-1265#slide-1
Motivation
We generally are not good at distinguishing rumors It is crucial to track and debunk rumors early to minimize their harmful effects. Online fact-checking services have limited topical coverage and long delay. Existing models use feature engineering over simplistic; or recently deep neural networks ignore propagation structures.
We generally are not good at distinguishing rumors It is crucial to track and debunk rumors early to minimize their harmful effects. Online fact-checking services have limited topical coverage and long delay. Existing models use feature engineering over simplistic; or recently deep neural networks ignore propagation structures.
[]
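For working with rows like the two above programmatically, the sketch below shows one way to reassemble a record: the content dict pairs an index array ("paper_content_id") with a sentence array ("paper_content_text"), and the slide text serves as the target side of the paper-to-slide task. This is a minimal sketch under loud assumptions: it assumes the dump is exported as JSON Lines, and the wrapper key "paper_content" plus the field names "slide_title" and "target" are hypothetical names used only for illustration, not confirmed by the rows above.

```python
import json

def load_records(path):
    # Assumes one JSON object per line (JSON Lines); adjust if the dump
    # actually uses a different serialization.
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

def sentences_with_ids(record):
    # The dicts shown above pair "paper_content_id" with "paper_content_text";
    # zipping them recovers (sentence_index, sentence) tuples in order.
    content = record["paper_content"]  # hypothetical wrapper key
    return list(zip(content["paper_content_id"],
                    content["paper_content_text"]))

def training_pair(record):
    # Source side: the concatenated paper sentences; target side: the slide text.
    # "slide_title" and "target" are assumed field names for illustration.
    source = " ".join(text for _, text in sentences_with_ids(record))
    return source, record.get("slide_title", ""), record.get("target", "")
```

A caller would iterate load_records(path) and hand each (source, title, target) triple to whatever slide-generation or summarization model is being trained or evaluated.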
GEM-SciDuet-train-101#paper-1265#slide-2
1265
Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-of-the-art rumor detection models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas.", "The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory 1 .", "The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line.", "We can see that after the initial post, the tweet was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread.", "A widely accepted definition of rumor is \"unverified and instrumentally relevant information statements in circulation\" (DiFonzo and Bordia, 2007) .", "This unverified information may eventually turn out to be true, or partly or entirely false.", "In today's ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society.", "Therefore, it is crucial to track and debunk such rumors in timely manner.", "Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors.", "However, such endeavor is manual, thus prone to poor coverage and low speed.", "Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.).", "But such an approach was over simplified as they ignored the dynamics of rumor propagation.", "Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013 (Kwon et al., , 2017 rather than the structure of propagation.", "So, can the propagation structure make any difference for differentiating rumors from nonrumors?", "Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014) .", "However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017) .", "Intuitively, for \"successful\" rumors being propagated as widely as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a 
lot of audiences joining in promoting the propagation.", "We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share.", "Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors.", "Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased.", "Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color).", "The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users' stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets.", "Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did.", "In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general.", "Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2 : Fragments of the propagation for two source tweets.", "Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps.", "Ma et al., 2015 Ma et al., , 2016 cannot be applied easily on such complex, dynamic structures.", "To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user's interactions to one another triggered by the source tweet.", "Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees.", "Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts.", "The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors.", "We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission.", "Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin.", "Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not.", "Since a rumor often begins as unverified and later turns 
out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016) , here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem.", "Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991) .", "Castillo et al.", "(2011) studied information credibility on Twitter using a wide range of hand-crafted features.", "Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015) .", "Zhao et al.", "(2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor.", "All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data.", "Some studies focus on finding temporal patterns for understanding rumor diffusion.", "Kown et al.", "(2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume.", "Ma et al.", "(2015) extended the model using time series to capture the variation of features over time.", "Friggeri et al.", "(2014) and Hannak et al.", "(2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites.", "More recently, Ma et al.", "(2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times.", "Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel.", "Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies.", "Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task.", "Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011) .", "Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001) , question-answering (Moschitti, 2006) , semantic analysis (Moschitti, 2004) , relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010) .", "These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors.", "Our proposed method is a substantial extension of tree kernel for modeling such structures.", "Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users.", "Once a user has posted a tweet, all his followers will receive the tweet.", "Furthermore, Twitter allows a user to retweet or comment another user's post, so that the information could reach beyond the network of the original creator.", "We model the propagation of each source tweet as a tree structure T (r) = V, E , where r is the source tweet as well as the root of the tree, V refers to a set of nodes each 
representing a responsive post (i.e., retweet or reply) of a user at a certain time to the source tweet r which initiates the circulation, and E is a set of directed edges corresponding to the response relation among the nodes in V .", "If there exists a directed edge from v i to v j , it means v j is a direct response to v i .", "More specifically, each node v ∈ V is repre- sented as a tuple v = (u v , c v , t v ) , which provides the following information: u v is the creator of the post, c v represents the text content of the post, and t v is the time lag between the source tweet r and v. In our case, u v contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., c v is a vector of binary features based on uni-grams and/or bi-grams representing the post's content.", "Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using kernel method called Propagation Tree Kernel (PTK).", "Our task is, given a propagation tree T (r) of a source tweet r, to predict the label of r. Background of Tree Kernel Before presenting our proposed algorithm, we briefly present the traditional tree kernel, which our PTK model is based on.", "Tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees.", "Given a syntactic parse tree, each node with its children is associated with a grammar production rule.", "Figure 3 illustrates the syntactic parse tree of \"cut a tree\" and its subtrees.", "A subtree is defined as any subgraph which has more than one nodes, with the restriction that entire (not partial) rule productions must be included.", "For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP → D N (Collins and Duffy, 2001) .", "Following Collins and Duffy (2001) , given two parse trees T 1 and T 2 , the kernel function K(T 1 , T 2 ) is defined as: v i ∈V 1 v j ∈V 2 ∆(v i , v j ) (1) where V 1 and V 2 are the sets of all nodes respectively in T 1 and T 2 , and each node is associated with a production rule, and ∆(v i , v j ) evaluates the common subtrees rooted at v i and v j .", "∆(v i , v j ) can be computed using the following recursive procedure (Collins and Duffy, 2001) : 1) if the production rules at v i and v j are different, then ∆(v i , v j ) = 0; 2) else if the production rules at v i and v j are same, and v i and v j have only leaf children (i.e., they are pre-terminal symbols), then ∆(v i , v j ) = λ; 3) else ∆(v i , v j ) = λ min(nc(v i ),nc(v j )) k=1 (1 + ∆(ch(v i , k), ch(v j , k))).", "where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and λ (0 < λ ≤ 1) is a decay factor.", "λ = 1 yields the number of common subtrees; λ < 1 down weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size.", "Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction of different types of rumors and non-rumors based on structural, linguistic and temporal properties.", "However, existing tree kernels cannot be readily applied on propagation trees because 1) unlike parse tree where the node is represented by enumerable nominal value (e.g., part-of-speech tag), the propagation tree node is given as a vector of continuous numerical values 
representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking if the same production rules and the same children are associated with the nodes in two subtrees being compared, whereas in our context the similarity function should be defined softly since hardly two nodes from different propagation trees are same.", "With the representation of propagation tree, we first define a function f to evaluate the similarity between two nodes v i and v j (we simplify the node representation for instance v i = (u i , c i , t i )) as the following: f (v i , v j ) = e −t (αE(u i , u j ) + (1 − α)J (c i , c j )) where t = |t i − t j | is the absolute difference between the time lags of v i and v j , E and J are user-based similarity and content-based similarity, respectively, and α is the trade-off parameter.", "The intuition of using exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor while the same posted far later from initial post may indicate the rumor is still unverified, despite that the two messages are semantically similar.", "The user-based similarity is defined as an Euclidean distance E(u i , u j ) = ||u i − u j || 2 , where u i and u j are the user vectors of node v i and v j and || • || 2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation.", "Contentwise, we use Jaccard coefficient to measure the similarity of post content: J (c i , c j ) = |N gram(c i ) ∩ N gram(c j )| |N gram(c i ) ∪ N gram(c j )| where c i and c j are the sets of content words in two nodes.", "For n-grams here, we adopt both uni-grams and bi-grams.", "It can capture cue terms e.g., 'false', 'debunk', 'not true', etc.", "commonly occurring in rumors but not in non-rumors.", "Given two propagation trees T 1 = V 1 , E 1 and T 2 = V 2 , E 2 , PTK aims to compute the similarity between T 1 and T 2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v i ∈ V 1 , we obtain v i ∈ V 2 , the most similar node of v i from V 2 : v i = arg max v j ∈V 2 f (v i , v j ) Similarly, for each v j ∈ V 2 , we obtain v j ∈ V 1 : v j = arg max v i ∈V 1 f (v i , v j ) Then, the propagation tree kernel K P (T 1 , T 2 ) is defined as: v i ∈V 1 Λ(v i , v i ) + v j ∈V 2 Λ(v j , v j ) (2) where Λ(v, v ) evaluates the similarity of two subtrees rooted at v and v , which is computed recursively as follows: 1) if v or v are leaf nodes, then Λ(v, v ) = f (v, v ); 2) else Λ(v, v ) = f (v, v ) min(nc(v),nc(v )) k=1 (1 + Λ(ch(v, k), ch(v , k))) Note that unlike traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, λ in tree kernel is not needed as subtree size is not an issue here thanks to node similarity f .", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses on rumors spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout entire propagation periods, and structural and temporal patterns help for longitudinal diffusion (Zubiaga et 
al., 2016; Kwon et al., 2017) .", "Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from source post to the current subtree.", "Intuitively, propagation paths provide further clue for determining the truthfulness of information since they embed the route and context of how the propagation happens.", "Therefore, we propose contextsensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares similar intuition with the context-sensitive tree kernel (Zhou et al., 2007) .", "For a propagation tree node v ∈ T (r), let L r v be the length (i.e., # of nodes) of the propagation path from root r to v, and v[x] be the x-th ancestor of v on the path starting from v (0 ≤ x < L r v , v[0] = v, v[L r v − 1] = r) .", "cPTK evaluates the similarity between two trees T 1 (r 1 ) and T 2 (r 2 ) as follows: v i ∈V 1 L r 1 v i −1 x=0 Λ x (v i , v i ) + v j ∈V 2 L r 2 v j −1 x=0 Λ x (v j , v j ) (3) where Λ x (v, v ) measures the similarity of sub- trees rooted at v[x] and v [x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Λ x (v, v ) = f (v[x], v [x]), where v[x] and v [x] are the x-th ancestor nodes of v and v on the respective propagation path.", "2) else Λ x (v, v ) = Λ(v, v ), namely PTK.", "Clearly, PTK is a special case of cPTK when x = 0 (see equation 3).", "cPTK evaluates the oc-currence of both context-free (without considering ancestors on propagation paths) and contextsensitive cases.", "Rumor Detection via Kernel Learning The advantage of kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004) .", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and its similarity values with all training instances as feature space.", "Therefore, the kernel matrix of training set is m × m and that of test set is n × m where m and n are the sizes of training and test sets, respectively.", "For our multi-class task, we perform a one-vsall classification for each label and then assign the one with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to interpretability of results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015) .", "Experiments and Results Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more accurately, the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016) .", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets 2 that are highly retweeted or replied.", "We then collected all the propagation threads (i.e., retweets and replies) for these source tweets.", "Because 
Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from 2 Though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful Twrench 3 and crawled the replies through Twitter's web interface.", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article in rumor debunking websites (e.g., snopes.com, Emergent.info, etc).", "Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) For a source tweet in false rumor event, we flip over the label and assign true to the source tweet if it expresses denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/nochange rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible 4 .", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time-series to model the variation of a set of hand-crafted features (Ma et al., 2015) .", "DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015) , which searches for enquiry phrases and clusters disputed factual claims, and ranked the clustered results based on statistical features.", "DTC and SVM-RBF: The Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with RBF kernel (Yang et al., 2012) , respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: The Random Forest Classifier proposed by Kwon et al.", "(2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to the user, linguistic and structure characteristics.", "GRU: The RNN-based rumor detection model proposed by Ma et al.", "(2016) with gated recurrent unit for representation learning of high-level features from relevant posts over time.", "BOW: A naive baseline we worked by representing the text in each tree using bag-of-words and building the rumor classifier with linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTKand cPTKare the setting of only using content while ignoring user properties.", "We implemented DTC and RFC with Weka 5 , SVM models with LibSVM 6 and GRU with Theano 7 .", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy, F 1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best, which learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., \"what?", "\", \"really?", "\", \"not sure\", etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.", "Although DTR uses a set of regular expressions, we found only 19.59% and 22.21% tweets in 
our datasets containing these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features especially focusing on temporal traits.", "But none of the models can directly incorporate structured propagation patterns for deep similarity compar- ison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of the structural kernel like ours.", "So, they performed obviously worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich of linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTKwhich only uses text is already better than GRU, demonstrating the importance of propagation structures.", "PTK that combines text and user yields better results on both datasets, implying that both properties are complementary and PTK integrating flat and structured information is obviously more effective.", "It is also observed that cPTK outperforms PTK except for non-rumor class.", "This suggests the context-sensitive modeling based on PTK is effective for different types of rumors, but for non- The example subtree of a rumor captured by the algorithm at early stage of propagation rumors, it seems that considering context of propagation path is not always helpful.", "This might be due to the generally weak signals originated from node properties on the paths during non-rumor's diffusion since user distribution patterns in nonrumors do not seem as obvious as in rumors.", "This is not an issue in cPTKsince user information is not considered at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs. 
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our ap-proach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures could be taken as quickly as possible.", "In early detection task, all the posts after a detection deadline are invisible during test.", "The earlier the deadline, the less propagation information can be available.", "Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNN) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance than other models.", "Particularly, cPTK achieve 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, that is much faster than other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures especially at early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such pattern was not witnessed in non-rumors in early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms stateof-the-art baselines with large margin for both general and early rumor detection tasks.", "Since kernel-based approach covers more structural information than feature-based methods, it allows kernel to further incorporate information from a high dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media." ] }
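The node similarity used by PTK in the record above is f(v_i, v_j) = e^{-t} (alpha * E(u_i, u_j) + (1 - alpha) * J(c_i, c_j)), combining a time-decay factor, a user-based Euclidean term and a content-based Jaccard term. A minimal sketch is given below; the node layout, the value of alpha, and the omission of any user-vector normalization (the paper treats f as lying in [0, 1]) are assumptions.

```python
# Sketch of the node similarity f from the paper; not the authors' code.
import math

def user_term(u_i, u_j):
    # Euclidean distance between the two user feature vectors (E in the paper).
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u_i, u_j)))

def jaccard(c_i, c_j):
    # Jaccard coefficient over the uni-gram/bi-gram sets of the two posts (J).
    union = c_i | c_j
    return len(c_i & c_j) / len(union) if union else 0.0

def node_similarity(v_i, v_j, alpha=0.5):
    # f(v_i, v_j) = exp(-|t_i - t_j|) * (alpha * E + (1 - alpha) * J)
    u_i, c_i, t_i = v_i
    u_j, c_j, t_j = v_j
    return math.exp(-abs(t_i - t_j)) * (
        alpha * user_term(u_i, u_j) + (1 - alpha) * jaccard(c_i, c_j))

# Example node: (user feature vector, set of n-grams, time lag)
v1 = ([0.2, 0.3], {"not", "true", "not true"}, 0.5)
v2 = ([0.1, 0.4], {"really", "not", "not true"}, 2.0)
print(node_similarity(v1, v2))
```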
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
GEM-SciDuet-train-101#paper-1265#slide-2
Contributions
Represent information spread on Twitter with propagation trees, formed by harvesting users' interactions, to capture high-order propagation patterns of rumors. Propose a kernel-based data-driven method to generate relevant features automatically for estimating the similarity between two propagation trees. Enhance the proposed model by considering propagation paths from the source tweet to subtrees to capture the context of transmission. Release two real-world Twitter datasets with finer-grained class labels.
Represent information spread on Twitter with propagation trees, formed by harvesting users' interactions, to capture high-order propagation patterns of rumors. Propose a kernel-based data-driven method to generate relevant features automatically for estimating the similarity between two propagation trees. Enhance the proposed model by considering propagation paths from the source tweet to subtrees to capture the context of transmission. Release two real-world Twitter datasets with finer-grained class labels.
[]
GEM-SciDuet-train-101#paper-1265#slide-3
1265
Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
How does fake news go viral via social media? How does its propagation pattern differ from that of real stories? In this paper, we address the problem of identifying rumors, i.e., fake information, in microblog posts based on their propagation structure. We first model the diffusion of microblog posts with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-of-the-art rumor detection models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas.", "The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory 1 .", "The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line.", "We can see that after the initial post, the tweet was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread.", "A widely accepted definition of rumor is \"unverified and instrumentally relevant information statements in circulation\" (DiFonzo and Bordia, 2007) .", "This unverified information may eventually turn out to be true, or partly or entirely false.", "In today's ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society.", "Therefore, it is crucial to track and debunk such rumors in timely manner.", "Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors.", "However, such endeavor is manual, thus prone to poor coverage and low speed.", "Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.).", "But such an approach was over simplified as they ignored the dynamics of rumor propagation.", "Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013 (Kwon et al., , 2017 rather than the structure of propagation.", "So, can the propagation structure make any difference for differentiating rumors from nonrumors?", "Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014) .", "However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017) .", "Intuitively, for \"successful\" rumors being propagated as widely as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a 
lot of audiences joining in promoting the propagation.", "We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share.", "Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors.", "Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased.", "Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color).", "The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users' stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets.", "Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did.", "In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general.", "Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2 : Fragments of the propagation for two source tweets.", "Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps.", "Ma et al., 2015 Ma et al., , 2016 cannot be applied easily on such complex, dynamic structures.", "To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user's interactions to one another triggered by the source tweet.", "Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees.", "Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts.", "The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors.", "We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission.", "Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin.", "Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not.", "Since a rumor often begins as unverified and later turns 
out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016) , here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem.", "Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991) .", "Castillo et al.", "(2011) studied information credibility on Twitter using a wide range of hand-crafted features.", "Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015) .", "Zhao et al.", "(2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor.", "All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data.", "Some studies focus on finding temporal patterns for understanding rumor diffusion.", "Kown et al.", "(2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume.", "Ma et al.", "(2015) extended the model using time series to capture the variation of features over time.", "Friggeri et al.", "(2014) and Hannak et al.", "(2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites.", "More recently, Ma et al.", "(2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times.", "Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel.", "Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies.", "Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task.", "Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011) .", "Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001) , question-answering (Moschitti, 2006) , semantic analysis (Moschitti, 2004) , relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010) .", "These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors.", "Our proposed method is a substantial extension of tree kernel for modeling such structures.", "Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users.", "Once a user has posted a tweet, all his followers will receive the tweet.", "Furthermore, Twitter allows a user to retweet or comment another user's post, so that the information could reach beyond the network of the original creator.", "We model the propagation of each source tweet as a tree structure T (r) = V, E , where r is the source tweet as well as the root of the tree, V refers to a set of nodes each 
representing a responsive post (i.e., retweet or reply) of a user at a certain time to the source tweet r which initiates the circulation, and E is a set of directed edges corresponding to the response relation among the nodes in V .", "If there exists a directed edge from v i to v j , it means v j is a direct response to v i .", "More specifically, each node v ∈ V is repre- sented as a tuple v = (u v , c v , t v ) , which provides the following information: u v is the creator of the post, c v represents the text content of the post, and t v is the time lag between the source tweet r and v. In our case, u v contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., c v is a vector of binary features based on uni-grams and/or bi-grams representing the post's content.", "Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using kernel method called Propagation Tree Kernel (PTK).", "Our task is, given a propagation tree T (r) of a source tweet r, to predict the label of r. Background of Tree Kernel Before presenting our proposed algorithm, we briefly present the traditional tree kernel, which our PTK model is based on.", "Tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees.", "Given a syntactic parse tree, each node with its children is associated with a grammar production rule.", "Figure 3 illustrates the syntactic parse tree of \"cut a tree\" and its subtrees.", "A subtree is defined as any subgraph which has more than one nodes, with the restriction that entire (not partial) rule productions must be included.", "For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP → D N (Collins and Duffy, 2001) .", "Following Collins and Duffy (2001) , given two parse trees T 1 and T 2 , the kernel function K(T 1 , T 2 ) is defined as: v i ∈V 1 v j ∈V 2 ∆(v i , v j ) (1) where V 1 and V 2 are the sets of all nodes respectively in T 1 and T 2 , and each node is associated with a production rule, and ∆(v i , v j ) evaluates the common subtrees rooted at v i and v j .", "∆(v i , v j ) can be computed using the following recursive procedure (Collins and Duffy, 2001) : 1) if the production rules at v i and v j are different, then ∆(v i , v j ) = 0; 2) else if the production rules at v i and v j are same, and v i and v j have only leaf children (i.e., they are pre-terminal symbols), then ∆(v i , v j ) = λ; 3) else ∆(v i , v j ) = λ min(nc(v i ),nc(v j )) k=1 (1 + ∆(ch(v i , k), ch(v j , k))).", "where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and λ (0 < λ ≤ 1) is a decay factor.", "λ = 1 yields the number of common subtrees; λ < 1 down weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size.", "Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction of different types of rumors and non-rumors based on structural, linguistic and temporal properties.", "However, existing tree kernels cannot be readily applied on propagation trees because 1) unlike parse tree where the node is represented by enumerable nominal value (e.g., part-of-speech tag), the propagation tree node is given as a vector of continuous numerical values 
representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking if the same production rules and the same children are associated with the nodes in two subtrees being compared, whereas in our context the similarity function should be defined softly since hardly two nodes from different propagation trees are same.", "With the representation of propagation tree, we first define a function f to evaluate the similarity between two nodes v i and v j (we simplify the node representation for instance v i = (u i , c i , t i )) as the following: f (v i , v j ) = e −t (αE(u i , u j ) + (1 − α)J (c i , c j )) where t = |t i − t j | is the absolute difference between the time lags of v i and v j , E and J are user-based similarity and content-based similarity, respectively, and α is the trade-off parameter.", "The intuition of using exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor while the same posted far later from initial post may indicate the rumor is still unverified, despite that the two messages are semantically similar.", "The user-based similarity is defined as an Euclidean distance E(u i , u j ) = ||u i − u j || 2 , where u i and u j are the user vectors of node v i and v j and || • || 2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation.", "Contentwise, we use Jaccard coefficient to measure the similarity of post content: J (c i , c j ) = |N gram(c i ) ∩ N gram(c j )| |N gram(c i ) ∪ N gram(c j )| where c i and c j are the sets of content words in two nodes.", "For n-grams here, we adopt both uni-grams and bi-grams.", "It can capture cue terms e.g., 'false', 'debunk', 'not true', etc.", "commonly occurring in rumors but not in non-rumors.", "Given two propagation trees T 1 = V 1 , E 1 and T 2 = V 2 , E 2 , PTK aims to compute the similarity between T 1 and T 2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v i ∈ V 1 , we obtain v i ∈ V 2 , the most similar node of v i from V 2 : v i = arg max v j ∈V 2 f (v i , v j ) Similarly, for each v j ∈ V 2 , we obtain v j ∈ V 1 : v j = arg max v i ∈V 1 f (v i , v j ) Then, the propagation tree kernel K P (T 1 , T 2 ) is defined as: v i ∈V 1 Λ(v i , v i ) + v j ∈V 2 Λ(v j , v j ) (2) where Λ(v, v ) evaluates the similarity of two subtrees rooted at v and v , which is computed recursively as follows: 1) if v or v are leaf nodes, then Λ(v, v ) = f (v, v ); 2) else Λ(v, v ) = f (v, v ) min(nc(v),nc(v )) k=1 (1 + Λ(ch(v, k), ch(v , k))) Note that unlike traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, λ in tree kernel is not needed as subtree size is not an issue here thanks to node similarity f .", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses on rumors spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout entire propagation periods, and structural and temporal patterns help for longitudinal diffusion (Zubiaga et 
al., 2016; Kwon et al., 2017) .", "Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from source post to the current subtree.", "Intuitively, propagation paths provide further clue for determining the truthfulness of information since they embed the route and context of how the propagation happens.", "Therefore, we propose contextsensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares similar intuition with the context-sensitive tree kernel (Zhou et al., 2007) .", "For a propagation tree node v ∈ T (r), let L r v be the length (i.e., # of nodes) of the propagation path from root r to v, and v[x] be the x-th ancestor of v on the path starting from v (0 ≤ x < L r v , v[0] = v, v[L r v − 1] = r) .", "cPTK evaluates the similarity between two trees T 1 (r 1 ) and T 2 (r 2 ) as follows: v i ∈V 1 L r 1 v i −1 x=0 Λ x (v i , v i ) + v j ∈V 2 L r 2 v j −1 x=0 Λ x (v j , v j ) (3) where Λ x (v, v ) measures the similarity of sub- trees rooted at v[x] and v [x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Λ x (v, v ) = f (v[x], v [x]), where v[x] and v [x] are the x-th ancestor nodes of v and v on the respective propagation path.", "2) else Λ x (v, v ) = Λ(v, v ), namely PTK.", "Clearly, PTK is a special case of cPTK when x = 0 (see equation 3).", "cPTK evaluates the oc-currence of both context-free (without considering ancestors on propagation paths) and contextsensitive cases.", "Rumor Detection via Kernel Learning The advantage of kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004) .", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and its similarity values with all training instances as feature space.", "Therefore, the kernel matrix of training set is m × m and that of test set is n × m where m and n are the sizes of training and test sets, respectively.", "For our multi-class task, we perform a one-vsall classification for each label and then assign the one with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to interpretability of results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015) .", "Experiments and Results Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more accurately, the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016) .", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets 2 that are highly retweeted or replied.", "We then collected all the propagation threads (i.e., retweets and replies) for these source tweets.", "Because 
Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from 2 Though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful Twrench 3 and crawled the replies through Twitter's web interface.", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article in rumor debunking websites (e.g., snopes.com, Emergent.info, etc).", "Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) For a source tweet in false rumor event, we flip over the label and assign true to the source tweet if it expresses denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/nochange rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible 4 .", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time-series to model the variation of a set of hand-crafted features (Ma et al., 2015) .", "DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015) , which searches for enquiry phrases and clusters disputed factual claims, and ranked the clustered results based on statistical features.", "DTC and SVM-RBF: The Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with RBF kernel (Yang et al., 2012) , respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: The Random Forest Classifier proposed by Kwon et al.", "(2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to the user, linguistic and structure characteristics.", "GRU: The RNN-based rumor detection model proposed by Ma et al.", "(2016) with gated recurrent unit for representation learning of high-level features from relevant posts over time.", "BOW: A naive baseline we worked by representing the text in each tree using bag-of-words and building the rumor classifier with linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTKand cPTKare the setting of only using content while ignoring user properties.", "We implemented DTC and RFC with Weka 5 , SVM models with LibSVM 6 and GRU with Theano 7 .", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy, F 1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best, which learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., \"what?", "\", \"really?", "\", \"not sure\", etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.", "Although DTR uses a set of regular expressions, we found only 19.59% and 22.21% tweets in 
our datasets containing these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features especially focusing on temporal traits.", "But none of the models can directly incorporate structured propagation patterns for deep similarity compar- ison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of the structural kernel like ours.", "So, they performed obviously worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich of linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTKwhich only uses text is already better than GRU, demonstrating the importance of propagation structures.", "PTK that combines text and user yields better results on both datasets, implying that both properties are complementary and PTK integrating flat and structured information is obviously more effective.", "It is also observed that cPTK outperforms PTK except for non-rumor class.", "This suggests the context-sensitive modeling based on PTK is effective for different types of rumors, but for non- The example subtree of a rumor captured by the algorithm at early stage of propagation rumors, it seems that considering context of propagation path is not always helpful.", "This might be due to the generally weak signals originated from node properties on the paths during non-rumor's diffusion since user distribution patterns in nonrumors do not seem as obvious as in rumors.", "This is not an issue in cPTKsince user information is not considered at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs. 
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our ap-proach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures could be taken as quickly as possible.", "In early detection task, all the posts after a detection deadline are invisible during test.", "The earlier the deadline, the less propagation information can be available.", "Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNN) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance than other models.", "Particularly, cPTK achieve 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, that is much faster than other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures especially at early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such pattern was not witnessed in non-rumors in early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms stateof-the-art baselines with large margin for both general and early rumor detection tasks.", "Since kernel-based approach covers more structural information than feature-based methods, it allows kernel to further incorporate information from a high dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media." ] }
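The propagation tree kernel described above is built from a soft node similarity f and a recursion Λ over paired subtrees rather than exact subtree matches. The sketch below is a minimal illustration of that computation, not the authors' released code: it assumes each node is a plain dict with "user", "ngrams", "t" and "children" fields, and the default α = 0.5 is an assumed value since the paper leaves the trade-off parameter unspecified.

```python
import math

def node_sim(v1, v2, alpha=0.5):
    # f(v1, v2) = exp(-|t1 - t2|) * (alpha * E(u1, u2) + (1 - alpha) * J(c1, c2)),
    # following the formula as written: E is a Euclidean distance over user
    # vectors, J a Jaccard coefficient over n-gram sets.
    dt = abs(v1["t"] - v2["t"])
    e = math.sqrt(sum((a - b) ** 2 for a, b in zip(v1["user"], v2["user"])))
    union = v1["ngrams"] | v2["ngrams"]
    j = len(v1["ngrams"] & v2["ngrams"]) / len(union) if union else 0.0
    return math.exp(-dt) * (alpha * e + (1 - alpha) * j)

def subtree_sim(v1, v2, f=node_sim):
    # Lambda(v, v'): soft count of similar subtrees rooted at v and v'.
    s = f(v1, v2)
    if not v1["children"] or not v2["children"]:      # leaf case
        return s
    for c1, c2 in zip(v1["children"], v2["children"]):  # pairs up to min #children
        s *= 1.0 + subtree_sim(c1, c2, f)
    return s

def all_nodes(tree):
    # Flatten a propagation tree into a list of nodes.
    out, stack = [], [tree]
    while stack:
        v = stack.pop()
        out.append(v)
        stack.extend(v["children"])
    return out

def ptk(tree1, tree2, f=node_sim):
    # K_P(T1, T2): pair every node with its most similar node from the other
    # tree, then sum the subtree similarities of those pairs (equation 2).
    V1, V2 = all_nodes(tree1), all_nodes(tree2)
    k = 0.0
    for v in V1:
        k += subtree_sim(v, max(V2, key=lambda w: f(v, w)), f)
    for w in V2:
        k += subtree_sim(max(V1, key=lambda u: f(u, w)), w, f)
    return k
```

A direct recursion like this costs on the order of |V1|·|V2| node-similarity evaluations before the subtree recursion, so caching f values between node pairs is worthwhile on larger trees.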
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
GEM-SciDuet-train-101#paper-1265#slide-3
Related Work
Systems based on common sense and investigative journalism Learning-based models for rumor detection Using handcrafted and temporal features: Liu et al. (2015), Ma et al. (2015) Using recurrent neural networks: Ma et al. (2016) Tree kernel: syntactic parsing (Collins and Duffy, 2001) Semantic analysis (Moschitti, 2004) Relation extraction (Zhang et al., 2008) Machine translation (Sun et al., 2010)
Systems based on common sense and investigative journalism Learning-based models for rumor detection Using handcrafted and temporal features: Liu et al. (2015), Ma et al. (2015) Using recurrent neural networks: Ma et al. (2016) Tree kernel: syntactic parsing (Collins and Duffy, 2001) Semantic analysis (Moschitti, 2004) Relation extraction (Zhang et al., 2008) Machine translation (Sun et al., 2010)
[]
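On the learning side, the kernel values against the training trees are what the SVM consumes: an m × m Gram matrix for training and an n × m matrix for test, with one-vs-all handling of the four classes. A hedged sketch with scikit-learn is below (the paper itself used LibSVM); `kernel` stands for either `ptk` or `cptk` from the earlier sketches, and the helper name is illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier

def gram_matrix(trees_a, trees_b, kernel):
    # K[i, j] = kernel(trees_a[i], trees_b[j]).
    return np.array([[kernel(a, b) for b in trees_b] for a in trees_a])

def train_and_predict(train_trees, y_train, test_trees, kernel):
    # m x m Gram matrix for training, n x m for test, as described above.
    K_train = gram_matrix(train_trees, train_trees, kernel)
    K_test = gram_matrix(test_trees, train_trees, kernel)
    clf = OneVsRestClassifier(SVC(kernel="precomputed"))  # one-vs-all over 4 classes
    clf.fit(K_train, y_train)
    return clf.predict(K_test)
```

Here `y_train` holds the four labels (non-rumor, false rumor, true rumor, unverified rumor), and the prediction for each test tree is the label of the binary classifier with the highest score.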
GEM-SciDuet-train-101#paper-1265#slide-4
1265
Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
How does fake news go viral via social media? How does its propagation pattern differ from that of real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We first model the diffusion of microblog posts with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-of-the-art rumor detection models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas.", "The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory 1 .", "The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line.", "We can see that after the initial post, the tweet was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread.", "A widely accepted definition of rumor is \"unverified and instrumentally relevant information statements in circulation\" (DiFonzo and Bordia, 2007) .", "This unverified information may eventually turn out to be true, or partly or entirely false.", "In today's ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society.", "Therefore, it is crucial to track and debunk such rumors in timely manner.", "Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors.", "However, such endeavor is manual, thus prone to poor coverage and low speed.", "Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.).", "But such an approach was over simplified as they ignored the dynamics of rumor propagation.", "Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013 (Kwon et al., , 2017 rather than the structure of propagation.", "So, can the propagation structure make any difference for differentiating rumors from nonrumors?", "Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014) .", "However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017) .", "Intuitively, for \"successful\" rumors being propagated as widely as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a 
lot of audiences joining in promoting the propagation.", "We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share.", "Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors.", "Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased.", "Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color).", "The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users' stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets.", "Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did.", "In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general.", "Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2 : Fragments of the propagation for two source tweets.", "Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps.", "Ma et al., 2015 Ma et al., , 2016 cannot be applied easily on such complex, dynamic structures.", "To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user's interactions to one another triggered by the source tweet.", "Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees.", "Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts.", "The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors.", "We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission.", "Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin.", "Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not.", "Since a rumor often begins as unverified and later turns 
out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016) , here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem.", "Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991) .", "Castillo et al.", "(2011) studied information credibility on Twitter using a wide range of hand-crafted features.", "Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015) .", "Zhao et al.", "(2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor.", "All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data.", "Some studies focus on finding temporal patterns for understanding rumor diffusion.", "Kown et al.", "(2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume.", "Ma et al.", "(2015) extended the model using time series to capture the variation of features over time.", "Friggeri et al.", "(2014) and Hannak et al.", "(2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites.", "More recently, Ma et al.", "(2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times.", "Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel.", "Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies.", "Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task.", "Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011) .", "Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001) , question-answering (Moschitti, 2006) , semantic analysis (Moschitti, 2004) , relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010) .", "These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors.", "Our proposed method is a substantial extension of tree kernel for modeling such structures.", "Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users.", "Once a user has posted a tweet, all his followers will receive the tweet.", "Furthermore, Twitter allows a user to retweet or comment another user's post, so that the information could reach beyond the network of the original creator.", "We model the propagation of each source tweet as a tree structure T (r) = V, E , where r is the source tweet as well as the root of the tree, V refers to a set of nodes each 
representing a responsive post (i.e., retweet or reply) of a user at a certain time to the source tweet r which initiates the circulation, and E is a set of directed edges corresponding to the response relation among the nodes in V .", "If there exists a directed edge from v i to v j , it means v j is a direct response to v i .", "More specifically, each node v ∈ V is repre- sented as a tuple v = (u v , c v , t v ) , which provides the following information: u v is the creator of the post, c v represents the text content of the post, and t v is the time lag between the source tweet r and v. In our case, u v contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., c v is a vector of binary features based on uni-grams and/or bi-grams representing the post's content.", "Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using kernel method called Propagation Tree Kernel (PTK).", "Our task is, given a propagation tree T (r) of a source tweet r, to predict the label of r. Background of Tree Kernel Before presenting our proposed algorithm, we briefly present the traditional tree kernel, which our PTK model is based on.", "Tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees.", "Given a syntactic parse tree, each node with its children is associated with a grammar production rule.", "Figure 3 illustrates the syntactic parse tree of \"cut a tree\" and its subtrees.", "A subtree is defined as any subgraph which has more than one nodes, with the restriction that entire (not partial) rule productions must be included.", "For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP → D N (Collins and Duffy, 2001) .", "Following Collins and Duffy (2001) , given two parse trees T 1 and T 2 , the kernel function K(T 1 , T 2 ) is defined as: v i ∈V 1 v j ∈V 2 ∆(v i , v j ) (1) where V 1 and V 2 are the sets of all nodes respectively in T 1 and T 2 , and each node is associated with a production rule, and ∆(v i , v j ) evaluates the common subtrees rooted at v i and v j .", "∆(v i , v j ) can be computed using the following recursive procedure (Collins and Duffy, 2001) : 1) if the production rules at v i and v j are different, then ∆(v i , v j ) = 0; 2) else if the production rules at v i and v j are same, and v i and v j have only leaf children (i.e., they are pre-terminal symbols), then ∆(v i , v j ) = λ; 3) else ∆(v i , v j ) = λ min(nc(v i ),nc(v j )) k=1 (1 + ∆(ch(v i , k), ch(v j , k))).", "where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and λ (0 < λ ≤ 1) is a decay factor.", "λ = 1 yields the number of common subtrees; λ < 1 down weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size.", "Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction of different types of rumors and non-rumors based on structural, linguistic and temporal properties.", "However, existing tree kernels cannot be readily applied on propagation trees because 1) unlike parse tree where the node is represented by enumerable nominal value (e.g., part-of-speech tag), the propagation tree node is given as a vector of continuous numerical values 
representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking if the same production rules and the same children are associated with the nodes in two subtrees being compared, whereas in our context the similarity function should be defined softly since hardly two nodes from different propagation trees are same.", "With the representation of propagation tree, we first define a function f to evaluate the similarity between two nodes v i and v j (we simplify the node representation for instance v i = (u i , c i , t i )) as the following: f (v i , v j ) = e −t (αE(u i , u j ) + (1 − α)J (c i , c j )) where t = |t i − t j | is the absolute difference between the time lags of v i and v j , E and J are user-based similarity and content-based similarity, respectively, and α is the trade-off parameter.", "The intuition of using exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor while the same posted far later from initial post may indicate the rumor is still unverified, despite that the two messages are semantically similar.", "The user-based similarity is defined as an Euclidean distance E(u i , u j ) = ||u i − u j || 2 , where u i and u j are the user vectors of node v i and v j and || • || 2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation.", "Contentwise, we use Jaccard coefficient to measure the similarity of post content: J (c i , c j ) = |N gram(c i ) ∩ N gram(c j )| |N gram(c i ) ∪ N gram(c j )| where c i and c j are the sets of content words in two nodes.", "For n-grams here, we adopt both uni-grams and bi-grams.", "It can capture cue terms e.g., 'false', 'debunk', 'not true', etc.", "commonly occurring in rumors but not in non-rumors.", "Given two propagation trees T 1 = V 1 , E 1 and T 2 = V 2 , E 2 , PTK aims to compute the similarity between T 1 and T 2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v i ∈ V 1 , we obtain v i ∈ V 2 , the most similar node of v i from V 2 : v i = arg max v j ∈V 2 f (v i , v j ) Similarly, for each v j ∈ V 2 , we obtain v j ∈ V 1 : v j = arg max v i ∈V 1 f (v i , v j ) Then, the propagation tree kernel K P (T 1 , T 2 ) is defined as: v i ∈V 1 Λ(v i , v i ) + v j ∈V 2 Λ(v j , v j ) (2) where Λ(v, v ) evaluates the similarity of two subtrees rooted at v and v , which is computed recursively as follows: 1) if v or v are leaf nodes, then Λ(v, v ) = f (v, v ); 2) else Λ(v, v ) = f (v, v ) min(nc(v),nc(v )) k=1 (1 + Λ(ch(v, k), ch(v , k))) Note that unlike traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, λ in tree kernel is not needed as subtree size is not an issue here thanks to node similarity f .", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses on rumors spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout entire propagation periods, and structural and temporal patterns help for longitudinal diffusion (Zubiaga et 
al., 2016; Kwon et al., 2017) .", "Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from source post to the current subtree.", "Intuitively, propagation paths provide further clue for determining the truthfulness of information since they embed the route and context of how the propagation happens.", "Therefore, we propose contextsensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares similar intuition with the context-sensitive tree kernel (Zhou et al., 2007) .", "For a propagation tree node v ∈ T (r), let L r v be the length (i.e., # of nodes) of the propagation path from root r to v, and v[x] be the x-th ancestor of v on the path starting from v (0 ≤ x < L r v , v[0] = v, v[L r v − 1] = r) .", "cPTK evaluates the similarity between two trees T 1 (r 1 ) and T 2 (r 2 ) as follows: v i ∈V 1 L r 1 v i −1 x=0 Λ x (v i , v i ) + v j ∈V 2 L r 2 v j −1 x=0 Λ x (v j , v j ) (3) where Λ x (v, v ) measures the similarity of sub- trees rooted at v[x] and v [x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Λ x (v, v ) = f (v[x], v [x]), where v[x] and v [x] are the x-th ancestor nodes of v and v on the respective propagation path.", "2) else Λ x (v, v ) = Λ(v, v ), namely PTK.", "Clearly, PTK is a special case of cPTK when x = 0 (see equation 3).", "cPTK evaluates the oc-currence of both context-free (without considering ancestors on propagation paths) and contextsensitive cases.", "Rumor Detection via Kernel Learning The advantage of kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004) .", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and its similarity values with all training instances as feature space.", "Therefore, the kernel matrix of training set is m × m and that of test set is n × m where m and n are the sizes of training and test sets, respectively.", "For our multi-class task, we perform a one-vsall classification for each label and then assign the one with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to interpretability of results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015) .", "Experiments and Results Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more accurately, the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016) .", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets 2 that are highly retweeted or replied.", "We then collected all the propagation threads (i.e., retweets and replies) for these source tweets.", "Because 
Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from 2 Though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful Twrench 3 and crawled the replies through Twitter's web interface.", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article in rumor debunking websites (e.g., snopes.com, Emergent.info, etc).", "Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) For a source tweet in false rumor event, we flip over the label and assign true to the source tweet if it expresses denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/nochange rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible 4 .", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time-series to model the variation of a set of hand-crafted features (Ma et al., 2015) .", "DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015) , which searches for enquiry phrases and clusters disputed factual claims, and ranked the clustered results based on statistical features.", "DTC and SVM-RBF: The Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with RBF kernel (Yang et al., 2012) , respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: The Random Forest Classifier proposed by Kwon et al.", "(2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to the user, linguistic and structure characteristics.", "GRU: The RNN-based rumor detection model proposed by Ma et al.", "(2016) with gated recurrent unit for representation learning of high-level features from relevant posts over time.", "BOW: A naive baseline we worked by representing the text in each tree using bag-of-words and building the rumor classifier with linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTKand cPTKare the setting of only using content while ignoring user properties.", "We implemented DTC and RFC with Weka 5 , SVM models with LibSVM 6 and GRU with Theano 7 .", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy, F 1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best, which learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., \"what?", "\", \"really?", "\", \"not sure\", etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.", "Although DTR uses a set of regular expressions, we found only 19.59% and 22.21% tweets in 
our datasets containing these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features especially focusing on temporal traits.", "But none of the models can directly incorporate structured propagation patterns for deep similarity compar- ison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of the structural kernel like ours.", "So, they performed obviously worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich of linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTKwhich only uses text is already better than GRU, demonstrating the importance of propagation structures.", "PTK that combines text and user yields better results on both datasets, implying that both properties are complementary and PTK integrating flat and structured information is obviously more effective.", "It is also observed that cPTK outperforms PTK except for non-rumor class.", "This suggests the context-sensitive modeling based on PTK is effective for different types of rumors, but for non- The example subtree of a rumor captured by the algorithm at early stage of propagation rumors, it seems that considering context of propagation path is not always helpful.", "This might be due to the generally weak signals originated from node properties on the paths during non-rumor's diffusion since user distribution patterns in nonrumors do not seem as obvious as in rumors.", "This is not an issue in cPTKsince user information is not considered at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs. 
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our ap-proach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures could be taken as quickly as possible.", "In early detection task, all the posts after a detection deadline are invisible during test.", "The earlier the deadline, the less propagation information can be available.", "Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNN) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance than other models.", "Particularly, cPTK achieve 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, that is much faster than other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures especially at early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such pattern was not witnessed in non-rumors in early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms stateof-the-art baselines with large margin for both general and early rumor detection tasks.", "Since kernel-based approach covers more structural information than feature-based methods, it allows kernel to further incorporate information from a high dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media." ] }
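For the early-detection setting described above, all responsive posts after a detection deadline are hidden at test time. A small helper of the following kind (assumed here, not taken from the paper's code) makes that setup concrete: it prunes each propagation tree to the posts whose time lag falls within the deadline, using the same dict node layout as the earlier sketches, after which the kernel classifier is re-run on the truncated trees.

```python
def truncate(node, deadline_hours):
    # Copy of the (sub)tree keeping only responsive posts with time lag <= deadline.
    kept = [truncate(c, deadline_hours)
            for c in node["children"] if c["t"] <= deadline_hours]
    return {"user": node["user"], "ngrams": node["ngrams"], "t": node["t"],
            "children": kept}

def trees_at_deadline(trees, deadline_hours):
    # Source tweets (roots, t = 0) are always kept; the classifier is then
    # re-evaluated on the truncated trees for each deadline.
    return [truncate(root, deadline_hours) for root in trees]
```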
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
GEM-SciDuet-train-101#paper-1265#slide-4
Problem Statement
Given a set of microblog posts R = {r}, model each source tweet r as a tree structure T(r) = <V, E>, where each node in V provides the creator of the post, the text content and the post time, and E is the set of directed edges corresponding to response relations. Task 1: finer-grained classification for each source post: false rumor, true rumor, non-rumor, unverified rumor. Task 2: detect rumors as early as possible
Given a set of microblog posts R = {r}, model each source tweet r as a tree structure T(r) = <V, E>, where each node in V provides the creator of the post, the text content and the post time, and E is the set of directed edges corresponding to response relations. Task 1: finer-grained classification for each source post: false rumor, true rumor, non-rumor, unverified rumor. Task 2: detect rumors as early as possible
[]
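The tree structure T = <V, E> in the problem statement, where each node records the post's creator, text content and time, can be made concrete with a small toy example. The encoding below (plain dicts with "user", "ngrams", "t" and "children" fields) is the same assumed layout used by the kernel sketches earlier in this section; the numbers are invented for illustration.

```python
# Toy propagation tree: an unpopular source tweet (t = 0) that a popular
# account boosts, followed by a doubting reply, roughly the constrained-mode
# pattern the paper associates with rumors. User vector here is
# [#followers, verified flag, #history posts].
doubter = {"user": [300, 0, 50], "ngrams": {"not", "true", "not true"},
           "t": 5.0, "children": []}
booster = {"user": [250000, 1, 9000], "ngrams": {"wow", "really", "really ?"},
           "t": 2.0, "children": [doubter]}
source  = {"user": [40, 0, 120], "ngrams": {"paid", "protesters", "paid protesters"},
           "t": 0.0, "children": [booster]}

# With the earlier sketches, ptk(source, other_tree) or cptk(source, other_tree)
# gives the kernel value between two such trees.
```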
GEM-SciDuet-train-101#paper-1265#slide-5
1265
Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
How does fake news go viral via social media? How does its propagation pattern differ from that of real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We first model the diffusion of microblog posts with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-of-the-art rumor detection models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas.", "The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory 1 .", "The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line.", "We can see that after the initial post, the tweet was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread.", "A widely accepted definition of rumor is \"unverified and instrumentally relevant information statements in circulation\" (DiFonzo and Bordia, 2007) .", "This unverified information may eventually turn out to be true, or partly or entirely false.", "In today's ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society.", "Therefore, it is crucial to track and debunk such rumors in timely manner.", "Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors.", "However, such endeavor is manual, thus prone to poor coverage and low speed.", "Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.).", "But such an approach was over simplified as they ignored the dynamics of rumor propagation.", "Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013 (Kwon et al., , 2017 rather than the structure of propagation.", "So, can the propagation structure make any difference for differentiating rumors from nonrumors?", "Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014) .", "However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017) .", "Intuitively, for \"successful\" rumors being propagated as widely as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a 
lot of audiences joining in promoting the propagation.", "We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share.", "Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors.", "Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased.", "Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color).", "The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users' stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets.", "Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did.", "In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general.", "Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2 : Fragments of the propagation for two source tweets.", "Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps.", "Ma et al., 2015 Ma et al., , 2016 cannot be applied easily on such complex, dynamic structures.", "To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user's interactions to one another triggered by the source tweet.", "Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees.", "Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts.", "The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors.", "We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission.", "Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin.", "Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not.", "Since a rumor often begins as unverified and later turns 
out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016) , here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem.", "Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991) .", "Castillo et al.", "(2011) studied information credibility on Twitter using a wide range of hand-crafted features.", "Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015) .", "Zhao et al.", "(2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor.", "All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data.", "Some studies focus on finding temporal patterns for understanding rumor diffusion.", "Kown et al.", "(2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume.", "Ma et al.", "(2015) extended the model using time series to capture the variation of features over time.", "Friggeri et al.", "(2014) and Hannak et al.", "(2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites.", "More recently, Ma et al.", "(2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times.", "Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel.", "Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies.", "Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task.", "Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011) .", "Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001) , question-answering (Moschitti, 2006) , semantic analysis (Moschitti, 2004) , relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010) .", "These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors.", "Our proposed method is a substantial extension of tree kernel for modeling such structures.", "Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users.", "Once a user has posted a tweet, all his followers will receive the tweet.", "Furthermore, Twitter allows a user to retweet or comment another user's post, so that the information could reach beyond the network of the original creator.", "We model the propagation of each source tweet as a tree structure T (r) = V, E , where r is the source tweet as well as the root of the tree, V refers to a set of nodes each 
representing a responsive post (i.e., retweet or reply) of a user at a certain time to the source tweet r which initiates the circulation, and E is a set of directed edges corresponding to the response relation among the nodes in V .", "If there exists a directed edge from v i to v j , it means v j is a direct response to v i .", "More specifically, each node v ∈ V is repre- sented as a tuple v = (u v , c v , t v ) , which provides the following information: u v is the creator of the post, c v represents the text content of the post, and t v is the time lag between the source tweet r and v. In our case, u v contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., c v is a vector of binary features based on uni-grams and/or bi-grams representing the post's content.", "Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using kernel method called Propagation Tree Kernel (PTK).", "Our task is, given a propagation tree T (r) of a source tweet r, to predict the label of r. Background of Tree Kernel Before presenting our proposed algorithm, we briefly present the traditional tree kernel, which our PTK model is based on.", "Tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees.", "Given a syntactic parse tree, each node with its children is associated with a grammar production rule.", "Figure 3 illustrates the syntactic parse tree of \"cut a tree\" and its subtrees.", "A subtree is defined as any subgraph which has more than one nodes, with the restriction that entire (not partial) rule productions must be included.", "For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP → D N (Collins and Duffy, 2001) .", "Following Collins and Duffy (2001) , given two parse trees T 1 and T 2 , the kernel function K(T 1 , T 2 ) is defined as: v i ∈V 1 v j ∈V 2 ∆(v i , v j ) (1) where V 1 and V 2 are the sets of all nodes respectively in T 1 and T 2 , and each node is associated with a production rule, and ∆(v i , v j ) evaluates the common subtrees rooted at v i and v j .", "∆(v i , v j ) can be computed using the following recursive procedure (Collins and Duffy, 2001) : 1) if the production rules at v i and v j are different, then ∆(v i , v j ) = 0; 2) else if the production rules at v i and v j are same, and v i and v j have only leaf children (i.e., they are pre-terminal symbols), then ∆(v i , v j ) = λ; 3) else ∆(v i , v j ) = λ min(nc(v i ),nc(v j )) k=1 (1 + ∆(ch(v i , k), ch(v j , k))).", "where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and λ (0 < λ ≤ 1) is a decay factor.", "λ = 1 yields the number of common subtrees; λ < 1 down weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size.", "Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction of different types of rumors and non-rumors based on structural, linguistic and temporal properties.", "However, existing tree kernels cannot be readily applied on propagation trees because 1) unlike parse tree where the node is represented by enumerable nominal value (e.g., part-of-speech tag), the propagation tree node is given as a vector of continuous numerical values 
representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking if the same production rules and the same children are associated with the nodes in two subtrees being compared, whereas in our context the similarity function should be defined softly since hardly two nodes from different propagation trees are same.", "With the representation of propagation tree, we first define a function f to evaluate the similarity between two nodes v i and v j (we simplify the node representation for instance v i = (u i , c i , t i )) as the following: f (v i , v j ) = e −t (αE(u i , u j ) + (1 − α)J (c i , c j )) where t = |t i − t j | is the absolute difference between the time lags of v i and v j , E and J are user-based similarity and content-based similarity, respectively, and α is the trade-off parameter.", "The intuition of using exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor while the same posted far later from initial post may indicate the rumor is still unverified, despite that the two messages are semantically similar.", "The user-based similarity is defined as an Euclidean distance E(u i , u j ) = ||u i − u j || 2 , where u i and u j are the user vectors of node v i and v j and || • || 2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation.", "Contentwise, we use Jaccard coefficient to measure the similarity of post content: J (c i , c j ) = |N gram(c i ) ∩ N gram(c j )| |N gram(c i ) ∪ N gram(c j )| where c i and c j are the sets of content words in two nodes.", "For n-grams here, we adopt both uni-grams and bi-grams.", "It can capture cue terms e.g., 'false', 'debunk', 'not true', etc.", "commonly occurring in rumors but not in non-rumors.", "Given two propagation trees T 1 = V 1 , E 1 and T 2 = V 2 , E 2 , PTK aims to compute the similarity between T 1 and T 2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v i ∈ V 1 , we obtain v i ∈ V 2 , the most similar node of v i from V 2 : v i = arg max v j ∈V 2 f (v i , v j ) Similarly, for each v j ∈ V 2 , we obtain v j ∈ V 1 : v j = arg max v i ∈V 1 f (v i , v j ) Then, the propagation tree kernel K P (T 1 , T 2 ) is defined as: v i ∈V 1 Λ(v i , v i ) + v j ∈V 2 Λ(v j , v j ) (2) where Λ(v, v ) evaluates the similarity of two subtrees rooted at v and v , which is computed recursively as follows: 1) if v or v are leaf nodes, then Λ(v, v ) = f (v, v ); 2) else Λ(v, v ) = f (v, v ) min(nc(v),nc(v )) k=1 (1 + Λ(ch(v, k), ch(v , k))) Note that unlike traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, λ in tree kernel is not needed as subtree size is not an issue here thanks to node similarity f .", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses on rumors spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout entire propagation periods, and structural and temporal patterns help for longitudinal diffusion (Zubiaga et 
al., 2016; Kwon et al., 2017) .", "Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from source post to the current subtree.", "Intuitively, propagation paths provide further clue for determining the truthfulness of information since they embed the route and context of how the propagation happens.", "Therefore, we propose contextsensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares similar intuition with the context-sensitive tree kernel (Zhou et al., 2007) .", "For a propagation tree node v ∈ T (r), let L r v be the length (i.e., # of nodes) of the propagation path from root r to v, and v[x] be the x-th ancestor of v on the path starting from v (0 ≤ x < L r v , v[0] = v, v[L r v − 1] = r) .", "cPTK evaluates the similarity between two trees T 1 (r 1 ) and T 2 (r 2 ) as follows: v i ∈V 1 L r 1 v i −1 x=0 Λ x (v i , v i ) + v j ∈V 2 L r 2 v j −1 x=0 Λ x (v j , v j ) (3) where Λ x (v, v ) measures the similarity of sub- trees rooted at v[x] and v [x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Λ x (v, v ) = f (v[x], v [x]), where v[x] and v [x] are the x-th ancestor nodes of v and v on the respective propagation path.", "2) else Λ x (v, v ) = Λ(v, v ), namely PTK.", "Clearly, PTK is a special case of cPTK when x = 0 (see equation 3).", "cPTK evaluates the oc-currence of both context-free (without considering ancestors on propagation paths) and contextsensitive cases.", "Rumor Detection via Kernel Learning The advantage of kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004) .", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and its similarity values with all training instances as feature space.", "Therefore, the kernel matrix of training set is m × m and that of test set is n × m where m and n are the sizes of training and test sets, respectively.", "For our multi-class task, we perform a one-vsall classification for each label and then assign the one with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to interpretability of results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015) .", "Experiments and Results Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more accurately, the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016) .", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets 2 that are highly retweeted or replied.", "We then collected all the propagation threads (i.e., retweets and replies) for these source tweets.", "Because 
Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from 2 Though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful Twrench 3 and crawled the replies through Twitter's web interface.", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article in rumor debunking websites (e.g., snopes.com, Emergent.info, etc).", "Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) For a source tweet in false rumor event, we flip over the label and assign true to the source tweet if it expresses denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/nochange rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible 4 .", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time-series to model the variation of a set of hand-crafted features (Ma et al., 2015) .", "DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015) , which searches for enquiry phrases and clusters disputed factual claims, and ranked the clustered results based on statistical features.", "DTC and SVM-RBF: The Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with RBF kernel (Yang et al., 2012) , respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: The Random Forest Classifier proposed by Kwon et al.", "(2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to the user, linguistic and structure characteristics.", "GRU: The RNN-based rumor detection model proposed by Ma et al.", "(2016) with gated recurrent unit for representation learning of high-level features from relevant posts over time.", "BOW: A naive baseline we worked by representing the text in each tree using bag-of-words and building the rumor classifier with linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTKand cPTKare the setting of only using content while ignoring user properties.", "We implemented DTC and RFC with Weka 5 , SVM models with LibSVM 6 and GRU with Theano 7 .", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy, F 1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best, which learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., \"what?", "\", \"really?", "\", \"not sure\", etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.", "Although DTR uses a set of regular expressions, we found only 19.59% and 22.21% tweets in 
our datasets containing these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features especially focusing on temporal traits.", "But none of the models can directly incorporate structured propagation patterns for deep similarity compar- ison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of the structural kernel like ours.", "So, they performed obviously worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich of linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTKwhich only uses text is already better than GRU, demonstrating the importance of propagation structures.", "PTK that combines text and user yields better results on both datasets, implying that both properties are complementary and PTK integrating flat and structured information is obviously more effective.", "It is also observed that cPTK outperforms PTK except for non-rumor class.", "This suggests the context-sensitive modeling based on PTK is effective for different types of rumors, but for non- The example subtree of a rumor captured by the algorithm at early stage of propagation rumors, it seems that considering context of propagation path is not always helpful.", "This might be due to the generally weak signals originated from node properties on the paths during non-rumor's diffusion since user distribution patterns in nonrumors do not seem as obvious as in rumors.", "This is not an issue in cPTKsince user information is not considered at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs. 
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our ap-proach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures could be taken as quickly as possible.", "In early detection task, all the posts after a detection deadline are invisible during test.", "The earlier the deadline, the less propagation information can be available.", "Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNN) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance than other models.", "Particularly, cPTK achieve 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, that is much faster than other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures especially at early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such pattern was not witnessed in non-rumors in early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms stateof-the-art baselines with large margin for both general and early rumor detection tasks.", "Since kernel-based approach covers more structural information than feature-based methods, it allows kernel to further incorporate information from a high dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
GEM-SciDuet-train-101#paper-1265#slide-5
Propagation Structure
O: Walmart donates $10,000 to support Darren Wilson and the on going racist police murders
Propagation tree:
U1: You don't honestly believe that, do you?
U2: i honestly do
U3: Sam Walton gave 300k to Obama's campaign? THINK.
U4: Sam Walton was dead before #Obama was born. He have wired campaign donation from heavens.
U5: where is the credible link?
U6: Need proof of this-can't find any...
U7: not sure....sorry I see a meme trending but no proof...perhaps if we had real journalists?
U8: I'm pretty good at research-I think this is not true-plenty of other reasons to boycott WalMart. :)
Jing Ma (CUHK)
O: Walmart donates $10,000 to support Darren Wilson and the on going racist police murders
Propagation tree:
U1: You don't honestly believe that, do you?
U2: i honestly do
U3: Sam Walton gave 300k to Obama's campaign? THINK.
U4: Sam Walton was dead before #Obama was born. He have wired campaign donation from heavens.
U5: where is the credible link?
U6: Need proof of this-can't find any...
U7: not sure....sorry I see a meme trending but no proof...perhaps if we had real journalists?
U8: I'm pretty good at research-I think this is not true-plenty of other reasons to boycott WalMart. :)
Jing Ma (CUHK)
[]
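Given that node similarity, the Propagation Tree Kernel K_P(T_1, T_2) described in the paper content above pairs every node with its most similar node in the other tree and softly counts similar subtrees via the recursion Λ. A sketch under the same assumptions as the previous block is below; the trees are assumed to be given as node lists plus a caller-supplied children(...) accessor, and children are paired in the order provided.

```python
def best_match(v, nodes, sim):
    """Most similar node to v among `nodes` under the node similarity `sim`."""
    return max(nodes, key=lambda w: sim(v, w))

def subtree_sim(v, w, children, sim):
    """Lambda(v, w): soft similarity of the subtrees rooted at v and w."""
    s = sim(v, w)
    cv, cw = children(v), children(w)
    if not cv or not cw:                      # leaf case: Lambda = f(v, w)
        return s
    for k in range(min(len(cv), len(cw))):    # pair the k-th children
        s *= 1.0 + subtree_sim(cv[k], cw[k], children, sim)
    return s

def ptk(nodes1, nodes2, children, sim):
    """K_P(T1, T2): soft subtree similarities over best-matched node pairs,
    summed in both directions."""
    k = 0.0
    for v in nodes1:
        k += subtree_sim(v, best_match(v, nodes2, sim), children, sim)
    for w in nodes2:
        k += subtree_sim(w, best_match(w, nodes1, sim), children, sim)
    return k
```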
GEM-SciDuet-train-101#paper-1265#slide-6
1265
Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-of-the-art rumor detection models.
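The context-sensitive extension cPTK described in the paper content additionally scores, for every matched node pair, the ancestors along their propagation paths back to the source tweets (the x > 0 terms), while the x = 0 term reduces to plain PTK. The sketch below reuses best_match and subtree_sim from the PTK block above; the ancestors(...) accessor (returning [v, parent(v), ..., root]) and the guard for paths of unequal length are assumptions of this sketch.

```python
def cptk(nodes1, nodes2, ancestors, children, sim):
    """Context-sensitive PTK: PTK term (x = 0) plus pairwise similarity of
    the x-th ancestors of each best-matched node pair (x > 0)."""
    k = 0.0
    for nodes, other in ((nodes1, nodes2), (nodes2, nodes1)):
        for v in nodes:
            w = best_match(v, other, sim)
            k += subtree_sim(v, w, children, sim)        # x = 0: plain PTK
            path_v, path_w = ancestors(v), ancestors(w)  # v[0], v[1], ..., root
            for x in range(1, len(path_v)):              # x > 0: context terms
                if x < len(path_w):                      # guard (assumption)
                    k += sim(path_v[x], path_w[x])
    return k
```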
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas.", "The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory 1 .", "The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line.", "We can see that after the initial post, the tweet was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread.", "A widely accepted definition of rumor is \"unverified and instrumentally relevant information statements in circulation\" (DiFonzo and Bordia, 2007) .", "This unverified information may eventually turn out to be true, or partly or entirely false.", "In today's ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society.", "Therefore, it is crucial to track and debunk such rumors in timely manner.", "Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors.", "However, such endeavor is manual, thus prone to poor coverage and low speed.", "Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.).", "But such an approach was over simplified as they ignored the dynamics of rumor propagation.", "Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013 (Kwon et al., , 2017 rather than the structure of propagation.", "So, can the propagation structure make any difference for differentiating rumors from nonrumors?", "Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014) .", "However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017) .", "Intuitively, for \"successful\" rumors being propagated as widely as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a 
lot of audiences joining in promoting the propagation.", "We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share.", "Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors.", "Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased.", "Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color).", "The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users' stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets.", "Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did.", "In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general.", "Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2 : Fragments of the propagation for two source tweets.", "Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps.", "Ma et al., 2015 Ma et al., , 2016 cannot be applied easily on such complex, dynamic structures.", "To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user's interactions to one another triggered by the source tweet.", "Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees.", "Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts.", "The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors.", "We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission.", "Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin.", "Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not.", "Since a rumor often begins as unverified and later turns 
out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016) , here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem.", "Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991) .", "Castillo et al.", "(2011) studied information credibility on Twitter using a wide range of hand-crafted features.", "Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015) .", "Zhao et al.", "(2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor.", "All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data.", "Some studies focus on finding temporal patterns for understanding rumor diffusion.", "Kown et al.", "(2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume.", "Ma et al.", "(2015) extended the model using time series to capture the variation of features over time.", "Friggeri et al.", "(2014) and Hannak et al.", "(2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites.", "More recently, Ma et al.", "(2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times.", "Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel.", "Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies.", "Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task.", "Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011) .", "Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001) , question-answering (Moschitti, 2006) , semantic analysis (Moschitti, 2004) , relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010) .", "These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors.", "Our proposed method is a substantial extension of tree kernel for modeling such structures.", "Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users.", "Once a user has posted a tweet, all his followers will receive the tweet.", "Furthermore, Twitter allows a user to retweet or comment another user's post, so that the information could reach beyond the network of the original creator.", "We model the propagation of each source tweet as a tree structure T (r) = V, E , where r is the source tweet as well as the root of the tree, V refers to a set of nodes each 
representing a responsive post (i.e., retweet or reply) of a user at a certain time to the source tweet r which initiates the circulation, and E is a set of directed edges corresponding to the response relation among the nodes in V .", "If there exists a directed edge from v i to v j , it means v j is a direct response to v i .", "More specifically, each node v ∈ V is repre- sented as a tuple v = (u v , c v , t v ) , which provides the following information: u v is the creator of the post, c v represents the text content of the post, and t v is the time lag between the source tweet r and v. In our case, u v contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., c v is a vector of binary features based on uni-grams and/or bi-grams representing the post's content.", "Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using kernel method called Propagation Tree Kernel (PTK).", "Our task is, given a propagation tree T (r) of a source tweet r, to predict the label of r. Background of Tree Kernel Before presenting our proposed algorithm, we briefly present the traditional tree kernel, which our PTK model is based on.", "Tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees.", "Given a syntactic parse tree, each node with its children is associated with a grammar production rule.", "Figure 3 illustrates the syntactic parse tree of \"cut a tree\" and its subtrees.", "A subtree is defined as any subgraph which has more than one nodes, with the restriction that entire (not partial) rule productions must be included.", "For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP → D N (Collins and Duffy, 2001) .", "Following Collins and Duffy (2001) , given two parse trees T 1 and T 2 , the kernel function K(T 1 , T 2 ) is defined as: v i ∈V 1 v j ∈V 2 ∆(v i , v j ) (1) where V 1 and V 2 are the sets of all nodes respectively in T 1 and T 2 , and each node is associated with a production rule, and ∆(v i , v j ) evaluates the common subtrees rooted at v i and v j .", "∆(v i , v j ) can be computed using the following recursive procedure (Collins and Duffy, 2001) : 1) if the production rules at v i and v j are different, then ∆(v i , v j ) = 0; 2) else if the production rules at v i and v j are same, and v i and v j have only leaf children (i.e., they are pre-terminal symbols), then ∆(v i , v j ) = λ; 3) else ∆(v i , v j ) = λ min(nc(v i ),nc(v j )) k=1 (1 + ∆(ch(v i , k), ch(v j , k))).", "where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and λ (0 < λ ≤ 1) is a decay factor.", "λ = 1 yields the number of common subtrees; λ < 1 down weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size.", "Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction of different types of rumors and non-rumors based on structural, linguistic and temporal properties.", "However, existing tree kernels cannot be readily applied on propagation trees because 1) unlike parse tree where the node is represented by enumerable nominal value (e.g., part-of-speech tag), the propagation tree node is given as a vector of continuous numerical values 
representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking if the same production rules and the same children are associated with the nodes in two subtrees being compared, whereas in our context the similarity function should be defined softly since hardly two nodes from different propagation trees are same.", "With the representation of propagation tree, we first define a function f to evaluate the similarity between two nodes v i and v j (we simplify the node representation for instance v i = (u i , c i , t i )) as the following: f (v i , v j ) = e −t (αE(u i , u j ) + (1 − α)J (c i , c j )) where t = |t i − t j | is the absolute difference between the time lags of v i and v j , E and J are user-based similarity and content-based similarity, respectively, and α is the trade-off parameter.", "The intuition of using exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor while the same posted far later from initial post may indicate the rumor is still unverified, despite that the two messages are semantically similar.", "The user-based similarity is defined as an Euclidean distance E(u i , u j ) = ||u i − u j || 2 , where u i and u j are the user vectors of node v i and v j and || • || 2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation.", "Contentwise, we use Jaccard coefficient to measure the similarity of post content: J (c i , c j ) = |N gram(c i ) ∩ N gram(c j )| |N gram(c i ) ∪ N gram(c j )| where c i and c j are the sets of content words in two nodes.", "For n-grams here, we adopt both uni-grams and bi-grams.", "It can capture cue terms e.g., 'false', 'debunk', 'not true', etc.", "commonly occurring in rumors but not in non-rumors.", "Given two propagation trees T 1 = V 1 , E 1 and T 2 = V 2 , E 2 , PTK aims to compute the similarity between T 1 and T 2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v i ∈ V 1 , we obtain v i ∈ V 2 , the most similar node of v i from V 2 : v i = arg max v j ∈V 2 f (v i , v j ) Similarly, for each v j ∈ V 2 , we obtain v j ∈ V 1 : v j = arg max v i ∈V 1 f (v i , v j ) Then, the propagation tree kernel K P (T 1 , T 2 ) is defined as: v i ∈V 1 Λ(v i , v i ) + v j ∈V 2 Λ(v j , v j ) (2) where Λ(v, v ) evaluates the similarity of two subtrees rooted at v and v , which is computed recursively as follows: 1) if v or v are leaf nodes, then Λ(v, v ) = f (v, v ); 2) else Λ(v, v ) = f (v, v ) min(nc(v),nc(v )) k=1 (1 + Λ(ch(v, k), ch(v , k))) Note that unlike traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, λ in tree kernel is not needed as subtree size is not an issue here thanks to node similarity f .", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses on rumors spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout entire propagation periods, and structural and temporal patterns help for longitudinal diffusion (Zubiaga et 
al., 2016; Kwon et al., 2017) .", "Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from source post to the current subtree.", "Intuitively, propagation paths provide further clue for determining the truthfulness of information since they embed the route and context of how the propagation happens.", "Therefore, we propose contextsensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares similar intuition with the context-sensitive tree kernel (Zhou et al., 2007) .", "For a propagation tree node v ∈ T (r), let L r v be the length (i.e., # of nodes) of the propagation path from root r to v, and v[x] be the x-th ancestor of v on the path starting from v (0 ≤ x < L r v , v[0] = v, v[L r v − 1] = r) .", "cPTK evaluates the similarity between two trees T 1 (r 1 ) and T 2 (r 2 ) as follows: v i ∈V 1 L r 1 v i −1 x=0 Λ x (v i , v i ) + v j ∈V 2 L r 2 v j −1 x=0 Λ x (v j , v j ) (3) where Λ x (v, v ) measures the similarity of sub- trees rooted at v[x] and v [x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Λ x (v, v ) = f (v[x], v [x]), where v[x] and v [x] are the x-th ancestor nodes of v and v on the respective propagation path.", "2) else Λ x (v, v ) = Λ(v, v ), namely PTK.", "Clearly, PTK is a special case of cPTK when x = 0 (see equation 3).", "cPTK evaluates the oc-currence of both context-free (without considering ancestors on propagation paths) and contextsensitive cases.", "Rumor Detection via Kernel Learning The advantage of kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004) .", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and its similarity values with all training instances as feature space.", "Therefore, the kernel matrix of training set is m × m and that of test set is n × m where m and n are the sizes of training and test sets, respectively.", "For our multi-class task, we perform a one-vsall classification for each label and then assign the one with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to interpretability of results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015) .", "Experiments and Results Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more accurately, the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016) .", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets 2 that are highly retweeted or replied.", "We then collected all the propagation threads (i.e., retweets and replies) for these source tweets.", "Because 
Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from 2 Though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful Twrench 3 and crawled the replies through Twitter's web interface.", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article in rumor debunking websites (e.g., snopes.com, Emergent.info, etc).", "Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) For a source tweet in false rumor event, we flip over the label and assign true to the source tweet if it expresses denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/nochange rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible 4 .", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time-series to model the variation of a set of hand-crafted features (Ma et al., 2015) .", "DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015) , which searches for enquiry phrases and clusters disputed factual claims, and ranked the clustered results based on statistical features.", "DTC and SVM-RBF: The Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with RBF kernel (Yang et al., 2012) , respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: The Random Forest Classifier proposed by Kwon et al.", "(2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to the user, linguistic and structure characteristics.", "GRU: The RNN-based rumor detection model proposed by Ma et al.", "(2016) with gated recurrent unit for representation learning of high-level features from relevant posts over time.", "BOW: A naive baseline we worked by representing the text in each tree using bag-of-words and building the rumor classifier with linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTKand cPTKare the setting of only using content while ignoring user properties.", "We implemented DTC and RFC with Weka 5 , SVM models with LibSVM 6 and GRU with Theano 7 .", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy, F 1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best, which learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., \"what?", "\", \"really?", "\", \"not sure\", etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.", "Although DTR uses a set of regular expressions, we found only 19.59% and 22.21% tweets in 
our datasets containing these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features especially focusing on temporal traits.", "But none of the models can directly incorporate structured propagation patterns for deep similarity compar- ison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of the structural kernel like ours.", "So, they performed obviously worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich of linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTKwhich only uses text is already better than GRU, demonstrating the importance of propagation structures.", "PTK that combines text and user yields better results on both datasets, implying that both properties are complementary and PTK integrating flat and structured information is obviously more effective.", "It is also observed that cPTK outperforms PTK except for non-rumor class.", "This suggests the context-sensitive modeling based on PTK is effective for different types of rumors, but for non- The example subtree of a rumor captured by the algorithm at early stage of propagation rumors, it seems that considering context of propagation path is not always helpful.", "This might be due to the generally weak signals originated from node properties on the paths during non-rumor's diffusion since user distribution patterns in nonrumors do not seem as obvious as in rumors.", "This is not an issue in cPTKsince user information is not considered at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs. 
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our ap-proach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures could be taken as quickly as possible.", "In early detection task, all the posts after a detection deadline are invisible during test.", "The earlier the deadline, the less propagation information can be available.", "Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNN) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance than other models.", "Particularly, cPTK achieve 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, that is much faster than other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures especially at early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such pattern was not witnessed in non-rumors in early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms stateof-the-art baselines with large margin for both general and early rumor detection tasks.", "Since kernel-based approach covers more structural information than feature-based methods, it allows kernel to further incorporate information from a high dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
GEM-SciDuet-train-101#paper-1265#slide-6
Observation and Hypothesis
Figure: (a) In rumors, (b) In non-rumors — Influence/Popularity
Network-based signals (e.g., relative influence) and
Our hypothesis: high-order patterns needs to/could be captured using kernel method
Figure: (a) In rumors, (b) In non-rumors — Influence/Popularity
Network-based signals (e.g., relative influence) and
Our hypothesis: high-order patterns needs to/could be captured using kernel method
[]
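As described in the experimental part of the paper content, each propagation tree is treated as an instance whose feature space is its kernel values against all training trees, fed to a kernel SVM with one-vs-all classification over the four classes (non-rumor, false, true, unverified). The original work used LibSVM; the scikit-learn sketch below with a precomputed m×m training and n×m test Gram matrix is an assumed but equivalent setup, and the helper names are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier

def gram_matrix(trees_a, trees_b, kernel):
    """Pairwise kernel values; rows index trees_a, columns index trees_b."""
    return np.array([[kernel(ta, tb) for tb in trees_b] for ta in trees_a])

def kernel_svm_predict(train_trees, y_train, test_trees, kernel):
    """Fit a one-vs-rest SVM on a precomputed PTK/cPTK Gram matrix (m x m)
    and predict the four-way labels for the test trees (n x m)."""
    k_train = gram_matrix(train_trees, train_trees, kernel)
    k_test = gram_matrix(test_trees, train_trees, kernel)
    clf = OneVsRestClassifier(SVC(kernel="precomputed"))
    clf.fit(k_train, y_train)
    return clf.predict(k_test)
```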
GEM-SciDuet-train-101#paper-1265#slide-7
1265
Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-of-the-art rumor detection models.
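For the early-detection evaluation described in the paper content, all posts after a detection deadline are invisible at test time, which amounts to truncating each propagation tree by the nodes' time lags before computing the kernel. A minimal sketch under the paper's node representation v = (u_v, c_v, t_v) follows; the deadline unit (e.g., hours of time lag) is an assumption of this sketch.

```python
def truncate_by_deadline(nodes, deadline):
    """Keep only posts whose time lag t from the source tweet is within the
    detection deadline; since a response never precedes the post it responds
    to, the surviving nodes still form a connected prefix of the tree."""
    return [(u, c, t) for (u, c, t) in nodes if t <= deadline]
```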
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas.", "The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory 1 .", "The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line.", "We can see that after the initial post, the tweet was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread.", "A widely accepted definition of rumor is \"unverified and instrumentally relevant information statements in circulation\" (DiFonzo and Bordia, 2007) .", "This unverified information may eventually turn out to be true, or partly or entirely false.", "In today's ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society.", "Therefore, it is crucial to track and debunk such rumors in timely manner.", "Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors.", "However, such endeavor is manual, thus prone to poor coverage and low speed.", "Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.).", "But such an approach was over simplified as they ignored the dynamics of rumor propagation.", "Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013 (Kwon et al., , 2017 rather than the structure of propagation.", "So, can the propagation structure make any difference for differentiating rumors from nonrumors?", "Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014) .", "However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017) .", "Intuitively, for \"successful\" rumors being propagated as widely as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a 
lot of audiences joining in promoting the propagation.", "We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share.", "Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors.", "Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased.", "Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color).", "The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users' stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets.", "Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did.", "In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general.", "Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2 : Fragments of the propagation for two source tweets.", "Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps.", "Ma et al., 2015 Ma et al., , 2016 cannot be applied easily on such complex, dynamic structures.", "To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user's interactions to one another triggered by the source tweet.", "Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees.", "Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts.", "The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors.", "We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission.", "Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin.", "Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not.", "Since a rumor often begins as unverified and later turns 
out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016) , here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem.", "Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991) .", "Castillo et al.", "(2011) studied information credibility on Twitter using a wide range of hand-crafted features.", "Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015) .", "Zhao et al.", "(2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor.", "All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data.", "Some studies focus on finding temporal patterns for understanding rumor diffusion.", "Kown et al.", "(2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume.", "Ma et al.", "(2015) extended the model using time series to capture the variation of features over time.", "Friggeri et al.", "(2014) and Hannak et al.", "(2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites.", "More recently, Ma et al.", "(2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times.", "Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel.", "Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies.", "Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task.", "Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011) .", "Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001) , question-answering (Moschitti, 2006) , semantic analysis (Moschitti, 2004) , relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010) .", "These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors.", "Our proposed method is a substantial extension of tree kernel for modeling such structures.", "Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users.", "Once a user has posted a tweet, all his followers will receive the tweet.", "Furthermore, Twitter allows a user to retweet or comment another user's post, so that the information could reach beyond the network of the original creator.", "We model the propagation of each source tweet as a tree structure T (r) = V, E , where r is the source tweet as well as the root of the tree, V refers to a set of nodes each 
representing a responsive post (i.e., retweet or reply) of a user at a certain time to the source tweet r, which initiates the circulation, and E is a set of directed edges corresponding to the response relations among the nodes in V.", "If there exists a directed edge from v_i to v_j, it means v_j is a direct response to v_i.", "More specifically, each node v ∈ V is represented as a tuple v = (u_v, c_v, t_v), which provides the following information: u_v is the creator of the post, c_v represents the text content of the post, and t_v is the time lag between the source tweet r and v. In our case, u_v contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., and c_v is a vector of binary features based on uni-grams and/or bi-grams representing the post's content.", "Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using a kernel method called Propagation Tree Kernel (PTK).", "Our task is, given a propagation tree T(r) of a source tweet r, to predict the label of r. Background of Tree Kernel Before presenting our proposed algorithm, we briefly review the traditional tree kernel, on which our PTK model is based.", "The tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees.", "Given a syntactic parse tree, each node with its children is associated with a grammar production rule.", "Figure 3 illustrates the syntactic parse tree of \"cut a tree\" and its subtrees.", "A subtree is defined as any subgraph which has more than one node, with the restriction that entire (not partial) rule productions must be included.", "For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP → D N (Collins and Duffy, 2001).", "Following Collins and Duffy (2001), given two parse trees T_1 and T_2, the kernel function is defined as K(T_1, T_2) = Σ_{v_i ∈ V_1} Σ_{v_j ∈ V_2} ∆(v_i, v_j)  (1), where V_1 and V_2 are the sets of all nodes in T_1 and T_2 respectively, each node is associated with a production rule, and ∆(v_i, v_j) evaluates the common subtrees rooted at v_i and v_j.", "∆(v_i, v_j) can be computed using the following recursive procedure (Collins and Duffy, 2001): 1) if the production rules at v_i and v_j are different, then ∆(v_i, v_j) = 0; 2) else if the production rules at v_i and v_j are the same, and v_i and v_j have only leaf children (i.e., they are pre-terminal symbols), then ∆(v_i, v_j) = λ; 3) else ∆(v_i, v_j) = λ · Π_{k=1}^{min(nc(v_i), nc(v_j))} (1 + ∆(ch(v_i, k), ch(v_j, k))), where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and λ (0 < λ ≤ 1) is a decay factor.", "λ = 1 yields the number of common subtrees; λ < 1 down-weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size.",
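To make the recursion above concrete, here is a small Python sketch of the Collins and Duffy subtree-counting kernel. It is an illustration written for this text, not code from the paper; the ParseNode class and the toy parse tree for "cut a tree" are assumed representations.

```python
# Minimal sketch of the Collins & Duffy (2001) subtree-counting kernel described above.
# ParseNode and the toy example are illustrative assumptions, not taken from the paper.

class ParseNode:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

    def production(self):
        # The production rule at this node, e.g. ('NP', ('D', 'N'))
        return (self.label, tuple(c.label for c in self.children))


def delta(v1, v2, lam=0.5):
    """Delta(v1, v2): down-weighted count of common subtrees rooted at v1 and v2."""
    if v1.production() != v2.production():
        return 0.0
    if not v1.children:
        return 0.0                      # a bare leaf is not a subtree on its own
    if all(not c.children for c in v1.children):
        return lam                      # matching pre-terminal rule
    result = lam
    for k in range(min(len(v1.children), len(v2.children))):
        result *= 1.0 + delta(v1.children[k], v2.children[k], lam)
    return result


def collect(node):
    """Flat list of all nodes of a tree."""
    out, stack = [], [node]
    while stack:
        n = stack.pop()
        out.append(n)
        stack.extend(n.children)
    return out


def tree_kernel(t1, t2, lam=0.5):
    """K(T1, T2): sum of Delta(v_i, v_j) over all node pairs, as in equation (1)."""
    return sum(delta(a, b, lam) for a in collect(t1) for b in collect(t2))


# Example: the parse tree of "cut a tree" from Figure 3.
vp = ParseNode("VP", [
    ParseNode("V", [ParseNode("cut")]),
    ParseNode("NP", [ParseNode("D", [ParseNode("a")]),
                     ParseNode("N", [ParseNode("tree")])]),
])
print(tree_kernel(vp, vp))  # self-similarity: weighted count of shared subtrees
```

Running tree_kernel on a tree paired with itself returns the λ-weighted count of subtrees the tree shares with itself, which is exactly the quantity equation (1) accumulates over all node pairs.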
"Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction between different types of rumors and non-rumors based on structural, linguistic and temporal properties.", "However, existing tree kernels cannot be readily applied to propagation trees because 1) unlike a parse tree, where each node is represented by an enumerable nominal value (e.g., a part-of-speech tag), a propagation tree node is given as a vector of continuous numerical values representing the basic properties of the node; and 2) the similarity of two parse trees is based on the count of common subtrees, where the commonality of subtrees is evaluated by checking whether the same production rules and the same children are associated with the nodes in the two subtrees being compared, whereas in our context the similarity function should be defined softly, since two nodes from different propagation trees are hardly ever identical.", "With this representation of propagation trees, we first define a function f to evaluate the similarity between two nodes v_i and v_j (we simplify the node representation to v_i = (u_i, c_i, t_i)) as follows: f(v_i, v_j) = e^(−t) (α E(u_i, u_j) + (1 − α) J(c_i, c_j)), where t = |t_i − t_j| is the absolute difference between the time lags of v_i and v_j, E and J are the user-based similarity and the content-based similarity, respectively, and α is a trade-off parameter.", "The intuition of using the exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor, while the same message posted far later than the initial post may indicate that the rumor is still unverified, even though the two messages are semantically similar.", "The user-based similarity is defined as a Euclidean distance E(u_i, u_j) = ||u_i − u_j||_2, where u_i and u_j are the user vectors of nodes v_i and v_j and ||·||_2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation.", "Content-wise, we use the Jaccard coefficient to measure the similarity of post content: J(c_i, c_j) = |Ngram(c_i) ∩ Ngram(c_j)| / |Ngram(c_i) ∪ Ngram(c_j)|, where c_i and c_j are the sets of content words in the two nodes.", "For the n-grams here, we adopt both uni-grams and bi-grams.", "This can capture cue terms, e.g., 'false', 'debunk', 'not true', etc., commonly occurring in rumors but not in non-rumors.", "Given two propagation trees T_1 = ⟨V_1, E_1⟩ and T_2 = ⟨V_2, E_2⟩, PTK aims to compute the similarity between T_1 and T_2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v_i ∈ V_1, we obtain v_i' ∈ V_2, the node of V_2 most similar to v_i: v_i' = argmax_{v_j ∈ V_2} f(v_i, v_j); similarly, for each v_j ∈ V_2, we obtain v_j' ∈ V_1: v_j' = argmax_{v_i ∈ V_1} f(v_i, v_j).", "Then, the propagation tree kernel is defined as K_P(T_1, T_2) = Σ_{v_i ∈ V_1} Λ(v_i, v_i') + Σ_{v_j ∈ V_2} Λ(v_j, v_j')  (2), where Λ(v, v') evaluates the similarity of the two subtrees rooted at v and v', computed recursively as follows: 1) if v or v' is a leaf node, then Λ(v, v') = f(v, v'); 2) else Λ(v, v') = f(v, v') · Π_{k=1}^{min(nc(v), nc(v'))} (1 + Λ(ch(v, k), ch(v', k))).", "Note that unlike the traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, the λ of the traditional tree kernel is not needed, as subtree size is not an issue here thanks to the node similarity f.", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses of rumor spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout the entire propagation period, and structural and temporal patterns help with longitudinal diffusion (Zubiaga et al., 2016; Kwon et al., 2017).",
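As a rough illustration of how the node similarity f and the soft subtree similarity Λ above could be computed, here is a minimal Python sketch. The TreeNode fields, the choice α = 0.5, and the flat node lists passed to ptk are assumptions made for this sketch only; it is not the authors' released implementation.

```python
import numpy as np

# Illustrative sketch of the PTK node similarity f and soft subtree similarity Lambda.
ALPHA = 0.5  # trade-off between user-based and content-based similarity (assumed value)


class TreeNode:
    def __init__(self, user_vec, ngrams, time_lag, children=None):
        self.u = np.asarray(user_vec, dtype=float)  # user attribute vector u_v
        self.c = set(ngrams)                        # uni-/bi-gram set of the post c_v
        self.t = float(time_lag)                    # time lag t_v from the source tweet
        self.children = children or []


def node_sim(vi, vj, alpha=ALPHA):
    """f(v_i, v_j) = exp(-|t_i - t_j|) * (alpha * E(u_i, u_j) + (1 - alpha) * J(c_i, c_j))."""
    e = np.linalg.norm(vi.u - vj.u)                        # user-based term E (Euclidean)
    union = vi.c | vj.c
    j = len(vi.c & vj.c) / len(union) if union else 0.0    # content-based Jaccard term J
    return float(np.exp(-abs(vi.t - vj.t)) * (alpha * e + (1 - alpha) * j))


def subtree_sim(v, w):
    """Lambda(v, v'): soft similarity of the subtrees rooted at v and w."""
    s = node_sim(v, w)
    if not v.children or not w.children:                   # leaf case
        return s
    for k in range(min(len(v.children), len(w.children))):
        s *= 1.0 + subtree_sim(v.children[k], w.children[k])
    return s


def ptk(nodes1, nodes2):
    """K_P(T1, T2), where nodes1 / nodes2 are flat lists of all nodes of the two trees."""
    k = 0.0
    for v in nodes1:                                       # pair v_i with its best match v_i'
        k += subtree_sim(v, max(nodes2, key=lambda w: node_sim(v, w)))
    for w in nodes2:                                       # pair v_j with its best match v_j'
        k += subtree_sim(w, max(nodes1, key=lambda v: node_sim(v, w)))
    return k
```

Note that, following the definition above, the user-based term E is a Euclidean distance and so grows as the two users differ; the sketch keeps the formula exactly as stated rather than converting it into a bounded similarity.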
"Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from the source post to the current subtree.", "Intuitively, propagation paths provide a further clue for determining the truthfulness of information, since they embed the route and context of how the propagation happens.", "Therefore, we propose context-sensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares a similar intuition with the context-sensitive tree kernel (Zhou et al., 2007).", "For a propagation tree node v ∈ T(r), let L^r_v be the length (i.e., # of nodes) of the propagation path from the root r to v, and let v[x] be the x-th ancestor of v on that path starting from v (0 ≤ x < L^r_v, v[0] = v, v[L^r_v − 1] = r).", "cPTK evaluates the similarity between two trees T_1(r_1) and T_2(r_2) as follows: Σ_{v_i ∈ V_1} Σ_{x=0}^{L^{r_1}_{v_i} − 1} Λ_x(v_i, v_i') + Σ_{v_j ∈ V_2} Σ_{x=0}^{L^{r_2}_{v_j} − 1} Λ_x(v_j, v_j')  (3), where Λ_x(v, v') measures the similarity of the subtrees rooted at v[x] and v'[x] for context-sensitive evaluation, computed as follows: 1) if x > 0, Λ_x(v, v') = f(v[x], v'[x]), where v[x] and v'[x] are the x-th ancestor nodes of v and v' on their respective propagation paths; 2) else Λ_x(v, v') = Λ(v, v'), namely PTK.", "Clearly, PTK is the special case of cPTK with x = 0 (see equation 3).", "cPTK evaluates both the context-free case (without considering ancestors on propagation paths) and the context-sensitive case.", "Rumor Detection via Kernel Learning The advantage of a kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004).", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and its similarity values with all training instances as its feature space.", "Therefore, the kernel matrix of the training set is m × m and that of the test set is n × m, where m and n are the sizes of the training and test sets, respectively.", "For our multi-class task, we perform one-vs-all classification for each label and then assign the label with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to the interpretability of its results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015).",
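One way to read equation (3) is that cPTK adds, on top of the plain PTK term, the node similarities of the matched ancestors along the two propagation paths. The sketch below reuses node_sim and subtree_sim from the PTK sketch above; the parent_of mapping, the truncation to the shorter of the two paths, and the scikit-learn usage in the trailing comment are illustrative assumptions, not the authors' implementation.

```python
# Illustrative context-sensitive extension (cPTK) on top of the PTK sketch above.

def ancestors(node, parent_of):
    """Path from `node` up to the root: [v[0]=node, v[1], ..., root]."""
    path = [node]
    while path[-1] in parent_of:
        path.append(parent_of[path[-1]])
    return path


def cptk(nodes1, parents1, nodes2, parents2):
    """cPTK: PTK term (x = 0) plus node similarities of matched ancestors (x > 0)."""
    k = 0.0
    for nodes_a, parents_a, nodes_b, parents_b in (
        (nodes1, parents1, nodes2, parents2),
        (nodes2, parents2, nodes1, parents1),
    ):
        for v in nodes_a:
            w = max(nodes_b, key=lambda n: node_sim(v, n))   # most similar node v'
            pa, pb = ancestors(v, parents_a), ancestors(w, parents_b)
            k += subtree_sim(v, w)                           # x = 0: plain PTK term
            # x > 0: path context, bounded by the shorter path (an assumption where
            # the two propagation paths have different lengths).
            for x in range(1, min(len(pa), len(pb))):
                k += node_sim(pa[x], pb[x])
    return k


# The m x m training and n x m test kernel matrices can then be fed to an SVM, e.g.:
#   from sklearn.svm import SVC
#   clf = SVC(kernel="precomputed", decision_function_shape="ovr")
#   clf.fit(K_train, y_train)    # K_train[i][j] = cptk(tree_i, tree_j)
#   pred = clf.predict(K_test)   # K_test[i][j]  = cptk(test_tree_i, train_tree_j)
```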
"Experiments and Results Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more precisely the tree roots together with the corresponding propagation structure, appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016).", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets that are highly retweeted or replied to (though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful).", "We then collected all the propagation threads (i.e., retweets and replies) for these source tweets.", "Because the Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from Twrench and crawled the replies through Twitter's web interface.", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article on rumor debunking websites (e.g., snopes.com, Emergent.info, etc.).", "Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) For a source tweet in a false rumor event, we flip over the label and assign true to the source tweet if it expresses a denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/no-change rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible.", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time series to model the variation of a set of hand-crafted features (Ma et al., 2015).", "DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015), which searches for enquiry phrases, clusters disputed factual claims, and ranks the clustered results based on statistical features.", "DTC and SVM-RBF: The Twitter information credibility model using a Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with RBF kernel (Yang et al., 2012), respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: The Random Forest Classifier proposed by Kwon et al. (2017), using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to user, linguistic and structural characteristics.", "GRU: The RNN-based rumor detection model proposed by Ma et al. (2016), with gated recurrent units for representation learning of high-level features from relevant posts over time.", "BOW: A naive baseline we built by representing the text in each tree using bag-of-words and building the rumor classifier with a linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTK- and cPTK- are the settings that only use content while ignoring user properties.", "We implemented DTC and RFC with Weka, the SVM models with LibSVM, and GRU with Theano.", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy and F1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best, as it learns a low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., \"what?\", \"really?\", \"not sure\", etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.", "Although DTR uses a set of regular expressions, we found that only 19.59% and 22.21% of the tweets in
our datasets contain these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features, especially those focusing on temporal traits.", "But none of these models can directly incorporate structured propagation patterns for deep similarity comparison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of a structural kernel like ours.", "So, they performed obviously worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich in linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTK-, which only uses text, is already better than GRU, demonstrating the importance of propagation structures.", "PTK, which combines text and user information, yields better results on both datasets, implying that both properties are complementary and that PTK, integrating flat and structured information, is obviously more effective.", "It is also observed that cPTK outperforms PTK except for the non-rumor class.", "This suggests that the context-sensitive modeling based on PTK is effective for different types of rumors, but for non-rumors, it seems that considering the context of the propagation path is not always helpful.", "(Figure 5 caption: The example subtree of a rumor captured by the algorithm at an early stage of propagation.)", "This might be due to the generally weak signals originating from node properties on the paths during a non-rumor's diffusion, since user distribution patterns in non-rumors do not seem as obvious as in rumors.", "This is not an issue in cPTK-, since user information is not considered at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs.
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our ap-proach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures could be taken as quickly as possible.", "In early detection task, all the posts after a detection deadline are invisible during test.", "The earlier the deadline, the less propagation information can be available.", "Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNN) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance than other models.", "Particularly, cPTK achieve 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, that is much faster than other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures especially at early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such pattern was not witnessed in non-rumors in early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms stateof-the-art baselines with large margin for both general and early rumor detection tasks.", "Since kernel-based approach covers more structural information than feature-based methods, it allows kernel to further incorporate information from a high dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
GEM-SciDuet-train-101#paper-1265#slide-7
Traditional Tree Kernel TK
TK computes the syntactic similarity between two sentences by counting the common subtrees; ∆(v_i, v_j) evaluates the common subtrees rooted at v_i and v_j
TK computes the syntactic similarity between two sentences by counting the common subtrees; ∆(v_i, v_j) evaluates the common subtrees rooted at v_i and v_j
[]
GEM-SciDuet-train-101#paper-1265#slide-8
1265
Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-ofthe-art rumor detection models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas.", "The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory 1 .", "The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line.", "We can see that after the initial post, the tweet was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread.", "A widely accepted definition of rumor is \"unverified and instrumentally relevant information statements in circulation\" (DiFonzo and Bordia, 2007) .", "This unverified information may eventually turn out to be true, or partly or entirely false.", "In today's ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society.", "Therefore, it is crucial to track and debunk such rumors in timely manner.", "Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors.", "However, such endeavor is manual, thus prone to poor coverage and low speed.", "Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.).", "But such an approach was over simplified as they ignored the dynamics of rumor propagation.", "Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013 (Kwon et al., , 2017 rather than the structure of propagation.", "So, can the propagation structure make any difference for differentiating rumors from nonrumors?", "Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014) .", "However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017) .", "Intuitively, for \"successful\" rumors being propagated as widely as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a 
lot of audiences joining in promoting the propagation.", "We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share.", "Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors.", "Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased.", "Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color).", "The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users' stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets.", "Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did.", "In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general.", "Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2 : Fragments of the propagation for two source tweets.", "Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps.", "Ma et al., 2015 Ma et al., , 2016 cannot be applied easily on such complex, dynamic structures.", "To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user's interactions to one another triggered by the source tweet.", "Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees.", "Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts.", "The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors.", "We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission.", "Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin.", "Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not.", "Since a rumor often begins as unverified and later turns 
out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016) , here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem.", "Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991) .", "Castillo et al.", "(2011) studied information credibility on Twitter using a wide range of hand-crafted features.", "Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015) .", "Zhao et al.", "(2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor.", "All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data.", "Some studies focus on finding temporal patterns for understanding rumor diffusion.", "Kown et al.", "(2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume.", "Ma et al.", "(2015) extended the model using time series to capture the variation of features over time.", "Friggeri et al.", "(2014) and Hannak et al.", "(2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites.", "More recently, Ma et al.", "(2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times.", "Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel.", "Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies.", "Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task.", "Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011) .", "Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001) , question-answering (Moschitti, 2006) , semantic analysis (Moschitti, 2004) , relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010) .", "These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors.", "Our proposed method is a substantial extension of tree kernel for modeling such structures.", "Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users.", "Once a user has posted a tweet, all his followers will receive the tweet.", "Furthermore, Twitter allows a user to retweet or comment another user's post, so that the information could reach beyond the network of the original creator.", "We model the propagation of each source tweet as a tree structure T (r) = V, E , where r is the source tweet as well as the root of the tree, V refers to a set of nodes each 
representing a responsive post (i.e., retweet or reply) of a user at a certain time to the source tweet r which initiates the circulation, and E is a set of directed edges corresponding to the response relation among the nodes in V .", "If there exists a directed edge from v i to v j , it means v j is a direct response to v i .", "More specifically, each node v ∈ V is repre- sented as a tuple v = (u v , c v , t v ) , which provides the following information: u v is the creator of the post, c v represents the text content of the post, and t v is the time lag between the source tweet r and v. In our case, u v contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., c v is a vector of binary features based on uni-grams and/or bi-grams representing the post's content.", "Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using kernel method called Propagation Tree Kernel (PTK).", "Our task is, given a propagation tree T (r) of a source tweet r, to predict the label of r. Background of Tree Kernel Before presenting our proposed algorithm, we briefly present the traditional tree kernel, which our PTK model is based on.", "Tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees.", "Given a syntactic parse tree, each node with its children is associated with a grammar production rule.", "Figure 3 illustrates the syntactic parse tree of \"cut a tree\" and its subtrees.", "A subtree is defined as any subgraph which has more than one nodes, with the restriction that entire (not partial) rule productions must be included.", "For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP → D N (Collins and Duffy, 2001) .", "Following Collins and Duffy (2001) , given two parse trees T 1 and T 2 , the kernel function K(T 1 , T 2 ) is defined as: v i ∈V 1 v j ∈V 2 ∆(v i , v j ) (1) where V 1 and V 2 are the sets of all nodes respectively in T 1 and T 2 , and each node is associated with a production rule, and ∆(v i , v j ) evaluates the common subtrees rooted at v i and v j .", "∆(v i , v j ) can be computed using the following recursive procedure (Collins and Duffy, 2001) : 1) if the production rules at v i and v j are different, then ∆(v i , v j ) = 0; 2) else if the production rules at v i and v j are same, and v i and v j have only leaf children (i.e., they are pre-terminal symbols), then ∆(v i , v j ) = λ; 3) else ∆(v i , v j ) = λ min(nc(v i ),nc(v j )) k=1 (1 + ∆(ch(v i , k), ch(v j , k))).", "where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and λ (0 < λ ≤ 1) is a decay factor.", "λ = 1 yields the number of common subtrees; λ < 1 down weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size.", "Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction of different types of rumors and non-rumors based on structural, linguistic and temporal properties.", "However, existing tree kernels cannot be readily applied on propagation trees because 1) unlike parse tree where the node is represented by enumerable nominal value (e.g., part-of-speech tag), the propagation tree node is given as a vector of continuous numerical values 
representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking if the same production rules and the same children are associated with the nodes in two subtrees being compared, whereas in our context the similarity function should be defined softly since hardly two nodes from different propagation trees are same.", "With the representation of propagation tree, we first define a function f to evaluate the similarity between two nodes v i and v j (we simplify the node representation for instance v i = (u i , c i , t i )) as the following: f (v i , v j ) = e −t (αE(u i , u j ) + (1 − α)J (c i , c j )) where t = |t i − t j | is the absolute difference between the time lags of v i and v j , E and J are user-based similarity and content-based similarity, respectively, and α is the trade-off parameter.", "The intuition of using exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor while the same posted far later from initial post may indicate the rumor is still unverified, despite that the two messages are semantically similar.", "The user-based similarity is defined as an Euclidean distance E(u i , u j ) = ||u i − u j || 2 , where u i and u j are the user vectors of node v i and v j and || • || 2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation.", "Contentwise, we use Jaccard coefficient to measure the similarity of post content: J (c i , c j ) = |N gram(c i ) ∩ N gram(c j )| |N gram(c i ) ∪ N gram(c j )| where c i and c j are the sets of content words in two nodes.", "For n-grams here, we adopt both uni-grams and bi-grams.", "It can capture cue terms e.g., 'false', 'debunk', 'not true', etc.", "commonly occurring in rumors but not in non-rumors.", "Given two propagation trees T 1 = V 1 , E 1 and T 2 = V 2 , E 2 , PTK aims to compute the similarity between T 1 and T 2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v i ∈ V 1 , we obtain v i ∈ V 2 , the most similar node of v i from V 2 : v i = arg max v j ∈V 2 f (v i , v j ) Similarly, for each v j ∈ V 2 , we obtain v j ∈ V 1 : v j = arg max v i ∈V 1 f (v i , v j ) Then, the propagation tree kernel K P (T 1 , T 2 ) is defined as: v i ∈V 1 Λ(v i , v i ) + v j ∈V 2 Λ(v j , v j ) (2) where Λ(v, v ) evaluates the similarity of two subtrees rooted at v and v , which is computed recursively as follows: 1) if v or v are leaf nodes, then Λ(v, v ) = f (v, v ); 2) else Λ(v, v ) = f (v, v ) min(nc(v),nc(v )) k=1 (1 + Λ(ch(v, k), ch(v , k))) Note that unlike traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, λ in tree kernel is not needed as subtree size is not an issue here thanks to node similarity f .", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses on rumors spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout entire propagation periods, and structural and temporal patterns help for longitudinal diffusion (Zubiaga et 
al., 2016; Kwon et al., 2017) .", "Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from source post to the current subtree.", "Intuitively, propagation paths provide further clue for determining the truthfulness of information since they embed the route and context of how the propagation happens.", "Therefore, we propose contextsensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares similar intuition with the context-sensitive tree kernel (Zhou et al., 2007) .", "For a propagation tree node v ∈ T (r), let L r v be the length (i.e., # of nodes) of the propagation path from root r to v, and v[x] be the x-th ancestor of v on the path starting from v (0 ≤ x < L r v , v[0] = v, v[L r v − 1] = r) .", "cPTK evaluates the similarity between two trees T 1 (r 1 ) and T 2 (r 2 ) as follows: v i ∈V 1 L r 1 v i −1 x=0 Λ x (v i , v i ) + v j ∈V 2 L r 2 v j −1 x=0 Λ x (v j , v j ) (3) where Λ x (v, v ) measures the similarity of sub- trees rooted at v[x] and v [x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Λ x (v, v ) = f (v[x], v [x]), where v[x] and v [x] are the x-th ancestor nodes of v and v on the respective propagation path.", "2) else Λ x (v, v ) = Λ(v, v ), namely PTK.", "Clearly, PTK is a special case of cPTK when x = 0 (see equation 3).", "cPTK evaluates the oc-currence of both context-free (without considering ancestors on propagation paths) and contextsensitive cases.", "Rumor Detection via Kernel Learning The advantage of kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004) .", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and its similarity values with all training instances as feature space.", "Therefore, the kernel matrix of training set is m × m and that of test set is n × m where m and n are the sizes of training and test sets, respectively.", "For our multi-class task, we perform a one-vsall classification for each label and then assign the one with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to interpretability of results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015) .", "Experiments and Results Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more accurately, the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016) .", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets 2 that are highly retweeted or replied.", "We then collected all the propagation threads (i.e., retweets and replies) for these source tweets.", "Because 
Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from 2 Though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful Twrench 3 and crawled the replies through Twitter's web interface.", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article in rumor debunking websites (e.g., snopes.com, Emergent.info, etc).", "Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) For a source tweet in false rumor event, we flip over the label and assign true to the source tweet if it expresses denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/nochange rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible 4 .", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time-series to model the variation of a set of hand-crafted features (Ma et al., 2015) .", "DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015) , which searches for enquiry phrases and clusters disputed factual claims, and ranked the clustered results based on statistical features.", "DTC and SVM-RBF: The Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with RBF kernel (Yang et al., 2012) , respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: The Random Forest Classifier proposed by Kwon et al.", "(2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to the user, linguistic and structure characteristics.", "GRU: The RNN-based rumor detection model proposed by Ma et al.", "(2016) with gated recurrent unit for representation learning of high-level features from relevant posts over time.", "BOW: A naive baseline we worked by representing the text in each tree using bag-of-words and building the rumor classifier with linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTKand cPTKare the setting of only using content while ignoring user properties.", "We implemented DTC and RFC with Weka 5 , SVM models with LibSVM 6 and GRU with Theano 7 .", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy, F 1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best, which learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., \"what?", "\", \"really?", "\", \"not sure\", etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.", "Although DTR uses a set of regular expressions, we found only 19.59% and 22.21% tweets in 
our datasets containing these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features especially focusing on temporal traits.", "But none of the models can directly incorporate structured propagation patterns for deep similarity compar- ison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of the structural kernel like ours.", "So, they performed obviously worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich of linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTKwhich only uses text is already better than GRU, demonstrating the importance of propagation structures.", "PTK that combines text and user yields better results on both datasets, implying that both properties are complementary and PTK integrating flat and structured information is obviously more effective.", "It is also observed that cPTK outperforms PTK except for non-rumor class.", "This suggests the context-sensitive modeling based on PTK is effective for different types of rumors, but for non- The example subtree of a rumor captured by the algorithm at early stage of propagation rumors, it seems that considering context of propagation path is not always helpful.", "This might be due to the generally weak signals originated from node properties on the paths during non-rumor's diffusion since user distribution patterns in nonrumors do not seem as obvious as in rumors.", "This is not an issue in cPTKsince user information is not considered at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs. 
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our ap-proach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures could be taken as quickly as possible.", "In early detection task, all the posts after a detection deadline are invisible during test.", "The earlier the deadline, the less propagation information can be available.", "Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNN) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance than other models.", "Particularly, cPTK achieve 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, that is much faster than other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures especially at early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such pattern was not witnessed in non-rumors in early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms stateof-the-art baselines with large margin for both general and early rumor detection tasks.", "Since kernel-based approach covers more structural information than feature-based methods, it allows kernel to further incorporate information from a high dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
GEM-SciDuet-train-101#paper-1265#slide-8
Propagation Tree Kernel PTK
Existing tree kernels cannot be applied here, since in our case (1) a node is a vector of continuous numerical values; (2) similarity needs to be defined softly between two trees instead of by hard counting of identical nodes
Existing tree kernels cannot be applied here, since in our case (1) a node is a vector of continuous numerical values; (2) similarity needs to be defined softly between two trees instead of by hard counting of identical nodes
[]
GEM-SciDuet-train-101#paper-1265#slide-9
1265
Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-ofthe-art rumor detection models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas.", "The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory 1 .", "The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line.", "We can see that after the initial post, the tweet was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread.", "A widely accepted definition of rumor is \"unverified and instrumentally relevant information statements in circulation\" (DiFonzo and Bordia, 2007) .", "This unverified information may eventually turn out to be true, or partly or entirely false.", "In today's ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society.", "Therefore, it is crucial to track and debunk such rumors in timely manner.", "Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors.", "However, such endeavor is manual, thus prone to poor coverage and low speed.", "Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.).", "But such an approach was over simplified as they ignored the dynamics of rumor propagation.", "Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013 (Kwon et al., , 2017 rather than the structure of propagation.", "So, can the propagation structure make any difference for differentiating rumors from nonrumors?", "Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014) .", "However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017) .", "Intuitively, for \"successful\" rumors being propagated as widely as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a 
lot of audiences joining in promoting the propagation.", "We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share.", "Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors.", "Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased.", "Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color).", "The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users' stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets.", "Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did.", "In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general.", "Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2 : Fragments of the propagation for two source tweets.", "Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps.", "Ma et al., 2015 Ma et al., , 2016 cannot be applied easily on such complex, dynamic structures.", "To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user's interactions to one another triggered by the source tweet.", "Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees.", "Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts.", "The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors.", "We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission.", "Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin.", "Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not.", "Since a rumor often begins as unverified and later turns 
out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016) , here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem.", "Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991) .", "Castillo et al.", "(2011) studied information credibility on Twitter using a wide range of hand-crafted features.", "Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015) .", "Zhao et al.", "(2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor.", "All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data.", "Some studies focus on finding temporal patterns for understanding rumor diffusion.", "Kown et al.", "(2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume.", "Ma et al.", "(2015) extended the model using time series to capture the variation of features over time.", "Friggeri et al.", "(2014) and Hannak et al.", "(2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites.", "More recently, Ma et al.", "(2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times.", "Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel.", "Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies.", "Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task.", "Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011) .", "Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001) , question-answering (Moschitti, 2006) , semantic analysis (Moschitti, 2004) , relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010) .", "These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors.", "Our proposed method is a substantial extension of tree kernel for modeling such structures.", "Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users.", "Once a user has posted a tweet, all his followers will receive the tweet.", "Furthermore, Twitter allows a user to retweet or comment another user's post, so that the information could reach beyond the network of the original creator.", "We model the propagation of each source tweet as a tree structure T (r) = V, E , where r is the source tweet as well as the root of the tree, V refers to a set of nodes each 
representing a responsive post (i.e., a retweet or reply) made by a user at a certain time in response to the source tweet r, which initiates the circulation, and E is a set of directed edges corresponding to the response relations among the nodes in V .", "If there exists a directed edge from v_i to v_j, it means v_j is a direct response to v_i.", "More specifically, each node v ∈ V is represented as a tuple v = (u_v, c_v, t_v), which provides the following information: u_v is the creator of the post, c_v represents the text content of the post, and t_v is the time lag between the source tweet r and v. In our case, u_v contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., and c_v is a vector of binary features based on uni-grams and/or bi-grams representing the post's content.", "Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using a kernel method called Propagation Tree Kernel (PTK).", "Our task is, given a propagation tree T(r) of a source tweet r, to predict the label of r. Background of Tree Kernel Before presenting our proposed algorithm, we briefly review the traditional tree kernel, which our PTK model is based on.", "The tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees.", "Given a syntactic parse tree, each node with its children is associated with a grammar production rule.", "Figure 3 illustrates the syntactic parse tree of \"cut a tree\" and its subtrees.", "A subtree is defined as any subgraph which has more than one node, with the restriction that entire (not partial) rule productions must be included.", "For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP → D N (Collins and Duffy, 2001) .", "Following Collins and Duffy (2001) , given two parse trees T_1 and T_2, the kernel function K(T_1, T_2) is defined as: K(T_1, T_2) = Σ_{v_i ∈ V_1} Σ_{v_j ∈ V_2} ∆(v_i, v_j) (1) where V_1 and V_2 are the sets of all nodes in T_1 and T_2, respectively, each node is associated with a production rule, and ∆(v_i, v_j) evaluates the common subtrees rooted at v_i and v_j.", "∆(v_i, v_j) can be computed using the following recursive procedure (Collins and Duffy, 2001) : 1) if the production rules at v_i and v_j are different, then ∆(v_i, v_j) = 0; 2) else if the production rules at v_i and v_j are the same, and v_i and v_j have only leaf children (i.e., they are pre-terminal symbols), then ∆(v_i, v_j) = λ; 3) else ∆(v_i, v_j) = λ ∏_{k=1}^{min(nc(v_i), nc(v_j))} (1 + ∆(ch(v_i, k), ch(v_j, k))).", "where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and λ (0 < λ ≤ 1) is a decay factor.", "λ = 1 yields the number of common subtrees; λ < 1 down-weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size.", "Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction between different types of rumors and non-rumors based on structural, linguistic and temporal properties.", "However, existing tree kernels cannot be readily applied to propagation trees because 1) unlike a parse tree, where each node is represented by an enumerable nominal value (e.g., a part-of-speech tag), a propagation tree node is given as a vector of continuous numerical values
representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking whether the same production rules and the same children are associated with the nodes of the two subtrees being compared, whereas in our context the similarity function should be defined softly, since two nodes from different propagation trees are hardly ever identical.", "With this representation of a propagation tree, we first define a function f to evaluate the similarity between two nodes v_i and v_j (we simplify the node representation, writing for instance v_i = (u_i, c_i, t_i)) as follows: f(v_i, v_j) = e^{−t} (α E(u_i, u_j) + (1 − α) J(c_i, c_j)) where t = |t_i − t_j| is the absolute difference between the time lags of v_i and v_j, E and J are the user-based similarity and the content-based similarity, respectively, and α is a trade-off parameter.", "The intuition of using an exponential function of t to scale down the similarity is to capture discriminant signals or patterns at different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor, while the same message posted far later than the initial post may indicate that the rumor is still unverified, even though the two messages are semantically similar.", "The user-based similarity is defined as a Euclidean distance E(u_i, u_j) = ||u_i − u_j||_2, where u_i and u_j are the user vectors of nodes v_i and v_j and ||·||_2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals throughout the entire course of propagation.", "Content-wise, we use the Jaccard coefficient to measure the similarity of post content: J(c_i, c_j) = |ngram(c_i) ∩ ngram(c_j)| / |ngram(c_i) ∪ ngram(c_j)| where c_i and c_j are the sets of content words in the two nodes.", "For the n-grams here, we adopt both uni-grams and bi-grams.", "This can capture cue terms, e.g., 'false', 'debunk', 'not true', etc., that commonly occur in rumors but not in non-rumors.", "Given two propagation trees T_1 = ⟨V_1, E_1⟩ and T_2 = ⟨V_2, E_2⟩, PTK aims to compute the similarity between T_1 and T_2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v_i ∈ V_1, we obtain v_i′ ∈ V_2, the node most similar to v_i in V_2: v_i′ = argmax_{v_j ∈ V_2} f(v_i, v_j). Similarly, for each v_j ∈ V_2, we obtain v_j′ ∈ V_1: v_j′ = argmax_{v_i ∈ V_1} f(v_i, v_j). Then, the propagation tree kernel K_P(T_1, T_2) is defined as: K_P(T_1, T_2) = Σ_{v_i ∈ V_1} Λ(v_i, v_i′) + Σ_{v_j ∈ V_2} Λ(v_j, v_j′) (2) where Λ(v, v′) evaluates the similarity of the two subtrees rooted at v and v′, which is computed recursively as follows: 1) if v or v′ is a leaf node, then Λ(v, v′) = f(v, v′); 2) else Λ(v, v′) = f(v, v′) ∏_{k=1}^{min(nc(v), nc(v′))} (1 + Λ(ch(v, k), ch(v′, k)))", "Note that unlike the traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, the λ of the tree kernel is not needed, as subtree size is not an issue here thanks to the node similarity f.", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses of rumor spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout the entire propagation period, and structural and temporal patterns help for longitudinal diffusion (Zubiaga et
al., 2016; Kwon et al., 2017) .", "Context-Sensitive Extension of PTK One defect of PTK is that it ignores clues outside the subtrees, e.g., how the information propagates from the source post to the current subtree.", "Intuitively, propagation paths provide further clues for determining the truthfulness of information, since they embed the route and context of how the propagation happens.", "Therefore, we propose context-sensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares a similar intuition with the context-sensitive tree kernel (Zhou et al., 2007) .", "For a propagation tree node v ∈ T(r), let L_v^r be the length (i.e., # of nodes) of the propagation path from the root r to v, and let v[x] be the x-th ancestor of v on the path starting from v (0 ≤ x < L_v^r, v[0] = v, v[L_v^r − 1] = r).", "cPTK evaluates the similarity between two trees T_1(r_1) and T_2(r_2) as follows: Σ_{v_i ∈ V_1} Σ_{x=0}^{L_{v_i}^{r_1} − 1} Λ_x(v_i, v_i′) + Σ_{v_j ∈ V_2} Σ_{x=0}^{L_{v_j}^{r_2} − 1} Λ_x(v_j, v_j′) (3) where Λ_x(v, v′) measures the similarity of the subtrees rooted at v[x] and v′[x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Λ_x(v, v′) = f(v[x], v′[x]), where v[x] and v′[x] are the x-th ancestor nodes of v and v′ on their respective propagation paths.", "2) else Λ_x(v, v′) = Λ(v, v′), namely PTK.", "Clearly, PTK is a special case of cPTK with x = 0 (see equation 3).", "cPTK evaluates the occurrence of both context-free (without considering ancestors on the propagation paths) and context-sensitive cases.", "Rumor Detection via Kernel Learning The advantage of the kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004) .", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and use its similarity values with all training instances as the feature space.", "Therefore, the kernel matrix of the training set is m × m and that of the test set is n × m, where m and n are the sizes of the training and test sets, respectively.", "For our multi-class task, we perform one-vs-all classification for each label and then assign the label with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to the interpretability of its results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015) .", "Experiments and Results Data Sets To our knowledge, there is no large public dataset available for classifying propagation trees, where we need a good number of source tweets, more precisely the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016) .", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets 2 that are highly retweeted or replied to.", "We then collected all the propagation threads (i.e., retweets and replies) for these source tweets.", "Because
the Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from Twrench 3 and crawled the replies through Twitter's web interface.", "(Footnote 2: Though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful.)", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article on rumor-debunking websites (e.g., snopes.com, Emergent.info, etc.).", "Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) For a source tweet in a false rumor event, we flip over the label and assign true to the source tweet if it expresses a denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/no-change rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible 4 .", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time-series to model the variation of a set of hand-crafted features (Ma et al., 2015) .", "DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015) , which searches for enquiry phrases, clusters disputed factual claims, and ranks the clustered results based on statistical features.", "DTC and SVM-RBF: The Twitter information credibility model using a Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with an RBF kernel (Yang et al., 2012) , respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: The Random Forest Classifier proposed by Kwon et al. (2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to user, linguistic and structural characteristics.", "GRU: The RNN-based rumor detection model proposed by Ma et al. (2016) with gated recurrent units for representation learning of high-level features from relevant posts over time.", "BOW: A naive baseline we built by representing the text in each tree using bag-of-words and building the rumor classifier with a linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTK− and cPTK− are the settings that only use content while ignoring user properties.", "We implemented DTC and RFC with Weka 5 , the SVM models with LibSVM 6 and GRU with Theano 7 .", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy and F1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best; it learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., \"what?\", \"really?\", \"not sure\", etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.", "Although DTR uses a set of regular expressions, we found that only 19.59% and 22.21% of the tweets in
our datasets contain these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features, especially ones focusing on temporal traits.", "But none of these models can directly incorporate structured propagation patterns for deep similarity comparison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of a structural kernel like ours.", "So, they performed clearly worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich in linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTK−, which only uses text, is already better than GRU, demonstrating the importance of propagation structures.", "PTK, which combines text and user information, yields better results on both datasets, implying that both properties are complementary and that PTK integrating flat and structured information is clearly more effective.", "It is also observed that cPTK outperforms PTK except for the non-rumor class.", "This suggests that the context-sensitive modeling based on PTK is effective for different types of rumors, but for non-rumors it seems that considering the context of the propagation path is not always helpful.", "(Figure 5 : The example subtree of a rumor captured by the algorithm at the early stage of propagation.)", "This might be due to the generally weak signals originating from node properties on the paths during a non-rumor's diffusion, since user distribution patterns in non-rumors do not seem as obvious as in rumors.", "This is not an issue in cPTK−, since user information is not considered there at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs.
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our ap-proach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures could be taken as quickly as possible.", "In early detection task, all the posts after a detection deadline are invisible during test.", "The earlier the deadline, the less propagation information can be available.", "Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNN) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance than other models.", "Particularly, cPTK achieve 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, that is much faster than other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures especially at early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such pattern was not witnessed in non-rumors in early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms stateof-the-art baselines with large margin for both general and early rumor detection tasks.", "Since kernel-based approach covers more structural information than feature-based methods, it allows kernel to further incorporate information from a high dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media." ] }
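The node similarity f described in the record above combines a time-decay term with a user-based term E and a content-based term J. The following is a minimal sketch of that computation, assuming each node is a (user_vector, ngram_set, time_lag) tuple; the function and variable names are illustrative placeholders rather than the authors' code, and E follows the record's definition as a Euclidean distance.

```python
import math

def node_similarity(v_i, v_j, alpha=0.5):
    """f(v_i, v_j) = exp(-|t_i - t_j|) * (alpha * E + (1 - alpha) * J).

    Each node is assumed to be a tuple (u, c, t): a user feature vector u,
    a set of content n-grams c, and a time lag t relative to the source tweet.
    alpha trades off the user term E against the content term J.
    """
    u_i, c_i, t_i = v_i
    u_j, c_j, t_j = v_j

    # User-based term: Euclidean (2-norm) distance between the user vectors,
    # as defined in the record.
    e_term = math.sqrt(sum((a - b) ** 2 for a, b in zip(u_i, u_j)))

    # Content-based term: Jaccard coefficient over uni-/bi-gram sets.
    union = c_i | c_j
    j_term = len(c_i & c_j) / len(union) if union else 0.0

    # Exponential decay in the difference of time lags scales the whole score.
    return math.exp(-abs(t_i - t_j)) * (alpha * e_term + (1 - alpha) * j_term)
```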
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
GEM-SciDuet-train-101#paper-1265#slide-9
Propagation Tree Kernel
Given two trees and >PTK compute similarity between them by enumerating all similar subtrees. and are similar node pairs from and respectively similarity of two subtrees rooted at and Sub-tree 1) if or are leaf nodes, then
Given two trees and >PTK compute similarity between them by enumerating all similar subtrees. and are similar node pairs from and respectively similarity of two subtrees rooted at and Sub-tree 1) if or are leaf nodes, then
[]
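The kernel-learning setup described in the record treats each propagation tree's similarity values against all training trees as its feature space (an m × m training kernel matrix and an n × m test matrix). One way to wire such a kernel into an SVM is a precomputed Gram matrix; the record mentions LibSVM, and the sketch below uses scikit-learn's SVC, which wraps libsvm and accepts precomputed kernels. The tree lists, labels, and my_kernel function are hypothetical placeholders.

```python
import numpy as np
from sklearn.svm import SVC

def gram_matrix(trees_a, trees_b, kernel_fn):
    """Pairwise kernel values K[i, j] = kernel_fn(trees_a[i], trees_b[j])."""
    K = np.zeros((len(trees_a), len(trees_b)))
    for i, t_a in enumerate(trees_a):
        for j, t_b in enumerate(trees_b):
            K[i, j] = kernel_fn(t_a, t_b)
    return K

# Hypothetical usage with the PTK/cPTK sketches above; train_trees, test_trees
# and y_train (labels: non-rumor, false, true, unverified) are placeholders.
# K_train = gram_matrix(train_trees, train_trees, my_kernel)   # m x m
# K_test  = gram_matrix(test_trees,  train_trees, my_kernel)   # n x m
# clf = SVC(kernel="precomputed", decision_function_shape="ovr")
# clf.fit(K_train, y_train)
# y_pred = clf.predict(K_test)
```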