Dataset fields (type and observed value lengths):

  gem_id              string, 37-41 chars
  paper_id            string, 3-4 chars
  paper_title         string, 19-183 chars
  paper_abstract      string, 168-1.38k chars
  paper_content       dict
  paper_headers       dict
  slide_id            string, 37-41 chars
  slide_title         string, 2-85 chars
  slide_content_text  string, 11-2.55k chars
  target              string, 11-2.55k chars
  references          list
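The records below are easiest to inspect programmatically. Here is a minimal sketch of loading one record with the Hugging Face `datasets` library; the hub id `GEM/SciDuet` and the `train` split are assumptions inferred from the `GEM-SciDuet-train-...` gem_id prefix, not confirmed by this page. The field names come from the schema listed above.

```python
from datasets import load_dataset

# Assumed hub id; the "GEM-SciDuet-train-..." gem_id prefix suggests the GEM SciDuet dataset.
ds = load_dataset("GEM/SciDuet", split="train")

example = ds[0]
print(example["gem_id"])        # e.g. GEM-SciDuet-train-101#paper-1265#slide-10
print(example["paper_title"])   # source paper title
print(example["slide_title"])   # title of the slide to be generated
print(example["target"])        # slide body text, i.e. the generation target
```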
gem_id: GEM-SciDuet-train-101#paper-1265#slide-10
paper_id: 1265
paper_title: Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
paper_abstract: How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-of-the-art rumor detection models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas.", "The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory 1 .", "The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line.", "We can see that after the initial post, the tweet was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread.", "A widely accepted definition of rumor is \"unverified and instrumentally relevant information statements in circulation\" (DiFonzo and Bordia, 2007) .", "This unverified information may eventually turn out to be true, or partly or entirely false.", "In today's ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society.", "Therefore, it is crucial to track and debunk such rumors in timely manner.", "Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors.", "However, such endeavor is manual, thus prone to poor coverage and low speed.", "Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.).", "But such an approach was over simplified as they ignored the dynamics of rumor propagation.", "Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013 (Kwon et al., , 2017 rather than the structure of propagation.", "So, can the propagation structure make any difference for differentiating rumors from nonrumors?", "Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014) .", "However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017) .", "Intuitively, for \"successful\" rumors being propagated as widely as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a 
lot of audiences joining in promoting the propagation.", "We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share.", "Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors.", "Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased.", "Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color).", "The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users' stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets.", "Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did.", "In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general.", "Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2 : Fragments of the propagation for two source tweets.", "Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps.", "Ma et al., 2015 Ma et al., , 2016 cannot be applied easily on such complex, dynamic structures.", "To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user's interactions to one another triggered by the source tweet.", "Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees.", "Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts.", "The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors.", "We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission.", "Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin.", "Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not.", "Since a rumor often begins as unverified and later turns 
out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016) , here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem.", "Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991) .", "Castillo et al.", "(2011) studied information credibility on Twitter using a wide range of hand-crafted features.", "Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015) .", "Zhao et al.", "(2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor.", "All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data.", "Some studies focus on finding temporal patterns for understanding rumor diffusion.", "Kown et al.", "(2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume.", "Ma et al.", "(2015) extended the model using time series to capture the variation of features over time.", "Friggeri et al.", "(2014) and Hannak et al.", "(2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites.", "More recently, Ma et al.", "(2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times.", "Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel.", "Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies.", "Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task.", "Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011) .", "Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001) , question-answering (Moschitti, 2006) , semantic analysis (Moschitti, 2004) , relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010) .", "These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors.", "Our proposed method is a substantial extension of tree kernel for modeling such structures.", "Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users.", "Once a user has posted a tweet, all his followers will receive the tweet.", "Furthermore, Twitter allows a user to retweet or comment another user's post, so that the information could reach beyond the network of the original creator.", "We model the propagation of each source tweet as a tree structure T (r) = V, E , where r is the source tweet as well as the root of the tree, V refers to a set of nodes each 
representing a responsive post (i.e., retweet or reply) of a user at a certain time to the source tweet r which initiates the circulation, and E is a set of directed edges corresponding to the response relation among the nodes in V .", "If there exists a directed edge from v i to v j , it means v j is a direct response to v i .", "More specifically, each node v ∈ V is repre- sented as a tuple v = (u v , c v , t v ) , which provides the following information: u v is the creator of the post, c v represents the text content of the post, and t v is the time lag between the source tweet r and v. In our case, u v contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., c v is a vector of binary features based on uni-grams and/or bi-grams representing the post's content.", "Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using kernel method called Propagation Tree Kernel (PTK).", "Our task is, given a propagation tree T (r) of a source tweet r, to predict the label of r. Background of Tree Kernel Before presenting our proposed algorithm, we briefly present the traditional tree kernel, which our PTK model is based on.", "Tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees.", "Given a syntactic parse tree, each node with its children is associated with a grammar production rule.", "Figure 3 illustrates the syntactic parse tree of \"cut a tree\" and its subtrees.", "A subtree is defined as any subgraph which has more than one nodes, with the restriction that entire (not partial) rule productions must be included.", "For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP β†’ D N (Collins and Duffy, 2001) .", "Following Collins and Duffy (2001) , given two parse trees T 1 and T 2 , the kernel function K(T 1 , T 2 ) is defined as: v i ∈V 1 v j ∈V 2 βˆ†(v i , v j ) (1) where V 1 and V 2 are the sets of all nodes respectively in T 1 and T 2 , and each node is associated with a production rule, and βˆ†(v i , v j ) evaluates the common subtrees rooted at v i and v j .", "βˆ†(v i , v j ) can be computed using the following recursive procedure (Collins and Duffy, 2001) : 1) if the production rules at v i and v j are different, then βˆ†(v i , v j ) = 0; 2) else if the production rules at v i and v j are same, and v i and v j have only leaf children (i.e., they are pre-terminal symbols), then βˆ†(v i , v j ) = Ξ»; 3) else βˆ†(v i , v j ) = Ξ» min(nc(v i ),nc(v j )) k=1 (1 + βˆ†(ch(v i , k), ch(v j , k))).", "where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and Ξ» (0 < Ξ» ≀ 1) is a decay factor.", "Ξ» = 1 yields the number of common subtrees; Ξ» < 1 down weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size.", "Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction of different types of rumors and non-rumors based on structural, linguistic and temporal properties.", "However, existing tree kernels cannot be readily applied on propagation trees because 1) unlike parse tree where the node is represented by enumerable nominal value (e.g., part-of-speech tag), the propagation tree node is given as a vector of continuous 
numerical values representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking if the same production rules and the same children are associated with the nodes in two subtrees being compared, whereas in our context the similarity function should be defined softly since hardly two nodes from different propagation trees are same.", "With the representation of propagation tree, we first define a function f to evaluate the similarity between two nodes v i and v j (we simplify the node representation for instance v i = (u i , c i , t i )) as the following: f (v i , v j ) = e βˆ’t (Ξ±E(u i , u j ) + (1 βˆ’ Ξ±)J (c i , c j )) where t = |t i βˆ’ t j | is the absolute difference between the time lags of v i and v j , E and J are user-based similarity and content-based similarity, respectively, and Ξ± is the trade-off parameter.", "The intuition of using exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor while the same posted far later from initial post may indicate the rumor is still unverified, despite that the two messages are semantically similar.", "The user-based similarity is defined as an Euclidean distance E(u i , u j ) = ||u i βˆ’ u j || 2 , where u i and u j are the user vectors of node v i and v j and || β€’ || 2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation.", "Contentwise, we use Jaccard coefficient to measure the similarity of post content: J (c i , c j ) = |N gram(c i ) ∩ N gram(c j )| |N gram(c i ) βˆͺ N gram(c j )| where c i and c j are the sets of content words in two nodes.", "For n-grams here, we adopt both uni-grams and bi-grams.", "It can capture cue terms e.g., 'false', 'debunk', 'not true', etc.", "commonly occurring in rumors but not in non-rumors.", "Given two propagation trees T 1 = V 1 , E 1 and T 2 = V 2 , E 2 , PTK aims to compute the similarity between T 1 and T 2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v i ∈ V 1 , we obtain v i ∈ V 2 , the most similar node of v i from V 2 : v i = arg max v j ∈V 2 f (v i , v j ) Similarly, for each v j ∈ V 2 , we obtain v j ∈ V 1 : v j = arg max v i ∈V 1 f (v i , v j ) Then, the propagation tree kernel K P (T 1 , T 2 ) is defined as: v i ∈V 1 Ξ›(v i , v i ) + v j ∈V 2 Ξ›(v j , v j ) (2) where Ξ›(v, v ) evaluates the similarity of two subtrees rooted at v and v , which is computed recursively as follows: 1) if v or v are leaf nodes, then Ξ›(v, v ) = f (v, v ); 2) else Ξ›(v, v ) = f (v, v ) min(nc(v),nc(v )) k=1 (1 + Ξ›(ch(v, k), ch(v , k))) Note that unlike traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, Ξ» in tree kernel is not needed as subtree size is not an issue here thanks to node similarity f .", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses on rumors spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout entire propagation periods, and structural and temporal patterns help 
for longitudinal diffusion (Zubiaga et al., 2016; Kwon et al., 2017) .", "Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from source post to the current subtree.", "Intuitively, propagation paths provide further clue for determining the truthfulness of information since they embed the route and context of how the propagation happens.", "Therefore, we propose contextsensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares similar intuition with the context-sensitive tree kernel (Zhou et al., 2007) .", "For a propagation tree node v ∈ T (r), let L r v be the length (i.e., # of nodes) of the propagation path from root r to v, and v[x] be the x-th ancestor of v on the path starting from v (0 ≀ x < L r v , v[0] = v, v[L r v βˆ’ 1] = r) .", "cPTK evaluates the similarity between two trees T 1 (r 1 ) and T 2 (r 2 ) as follows: v i ∈V 1 L r 1 v i βˆ’1 x=0 Ξ› x (v i , v i ) + v j ∈V 2 L r 2 v j βˆ’1 x=0 Ξ› x (v j , v j ) (3) where Ξ› x (v, v ) measures the similarity of sub- trees rooted at v[x] and v [x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Ξ› x (v, v ) = f (v[x], v [x]), where v[x] and v [x] are the x-th ancestor nodes of v and v on the respective propagation path.", "2) else Ξ› x (v, v ) = Ξ›(v, v ), namely PTK.", "Clearly, PTK is a special case of cPTK when x = 0 (see equation 3).", "cPTK evaluates the oc-currence of both context-free (without considering ancestors on propagation paths) and contextsensitive cases.", "Rumor Detection via Kernel Learning The advantage of kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004) .", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and its similarity values with all training instances as feature space.", "Therefore, the kernel matrix of training set is m Γ— m and that of test set is n Γ— m where m and n are the sizes of training and test sets, respectively.", "For our multi-class task, we perform a one-vsall classification for each label and then assign the one with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to interpretability of results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015) .", "Experiments and Results Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more accurately, the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016) .", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets 2 that are highly retweeted or replied.", "We then collected all the propagation threads (i.e., 
retweets and replies) for these source tweets.", "Because Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from 2 Though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful Twrench 3 and crawled the replies through Twitter's web interface.", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article in rumor debunking websites (e.g., snopes.com, Emergent.info, etc).", "Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) For a source tweet in false rumor event, we flip over the label and assign true to the source tweet if it expresses denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/nochange rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible 4 .", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time-series to model the variation of a set of hand-crafted features (Ma et al., 2015) .", "DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015) , which searches for enquiry phrases and clusters disputed factual claims, and ranked the clustered results based on statistical features.", "DTC and SVM-RBF: The Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with RBF kernel (Yang et al., 2012) , respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: The Random Forest Classifier proposed by Kwon et al.", "(2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to the user, linguistic and structure characteristics.", "GRU: The RNN-based rumor detection model proposed by Ma et al.", "(2016) with gated recurrent unit for representation learning of high-level features from relevant posts over time.", "BOW: A naive baseline we worked by representing the text in each tree using bag-of-words and building the rumor classifier with linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTKand cPTKare the setting of only using content while ignoring user properties.", "We implemented DTC and RFC with Weka 5 , SVM models with LibSVM 6 and GRU with Theano 7 .", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy, F 1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best, which learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., \"what?", "\", \"really?", "\", \"not sure\", etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.", "Although DTR uses a set of 
regular expressions, we found only 19.59% and 22.21% tweets in our datasets containing these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features especially focusing on temporal traits.", "But none of the models can directly incorporate structured propagation patterns for deep similarity compar- ison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of the structural kernel like ours.", "So, they performed obviously worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich of linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTKwhich only uses text is already better than GRU, demonstrating the importance of propagation structures.", "PTK that combines text and user yields better results on both datasets, implying that both properties are complementary and PTK integrating flat and structured information is obviously more effective.", "It is also observed that cPTK outperforms PTK except for non-rumor class.", "This suggests the context-sensitive modeling based on PTK is effective for different types of rumors, but for non- The example subtree of a rumor captured by the algorithm at early stage of propagation rumors, it seems that considering context of propagation path is not always helpful.", "This might be due to the generally weak signals originated from node properties on the paths during non-rumor's diffusion since user distribution patterns in nonrumors do not seem as obvious as in rumors.", "This is not an issue in cPTKsince user information is not considered at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs. 
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our ap-proach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures could be taken as quickly as possible.", "In early detection task, all the posts after a detection deadline are invisible during test.", "The earlier the deadline, the less propagation information can be available.", "Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNN) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance than other models.", "Particularly, cPTK achieve 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, that is much faster than other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures especially at early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such pattern was not witnessed in non-rumors in early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms stateof-the-art baselines with large margin for both general and early rumor detection tasks.", "Since kernel-based approach covers more structural information than feature-based methods, it allows kernel to further incorporate information from a high dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media." ] }
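Section 4.2 of the paper text above defines the node similarity f (a time-scaled combination of a user-based term E and a content-based Jaccard term J) and the recursive subtree similarity Λ used by PTK. Below is a minimal Python sketch of those two formulas as stated in the text; the Node container, the feature encodings, the default α = 0.5, and all function names are illustrative assumptions, not the authors' released implementation.

```python
import math
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Node:
    user: List[float]                    # user feature vector (# followers, verified, ...)
    content: Set[str]                    # uni-/bi-grams of the post text
    time: float                          # time lag from the source tweet
    children: List["Node"] = field(default_factory=list)

def node_sim(a: Node, b: Node, alpha: float = 0.5) -> float:
    """f(v_i, v_j) = exp(-|t_i - t_j|) * (alpha * E + (1 - alpha) * J), as in Sec. 4.2."""
    # User-based term E: Euclidean (2-norm) distance between the user vectors.
    e = math.sqrt(sum((x - y) ** 2 for x, y in zip(a.user, b.user)))
    # Content-based term J: Jaccard coefficient over the n-gram sets.
    union = a.content | b.content
    j = len(a.content & b.content) / len(union) if union else 0.0
    return math.exp(-abs(a.time - b.time)) * (alpha * e + (1 - alpha) * j)

def subtree_sim(a: Node, b: Node) -> float:
    """Lambda(v, v'): soft count of similar subtrees rooted at a and b (Eq. 2's recursion)."""
    f = node_sim(a, b)
    if not a.children or not b.children:           # either node is a leaf
        return f
    prod = 1.0
    for ca, cb in zip(a.children, b.children):     # pairs up the first min(nc, nc') children
        prod *= 1.0 + subtree_sim(ca, cb)
    return f * prod
```

On top of these, the full kernel K_P(T1, T2) of Eq. 2 pairs each node with its most similar node (argmax of f) in the other tree and sums Λ over those pairs in both directions.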
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
slide_id: GEM-SciDuet-train-101#paper-1265#slide-10
slide_title: Context Sensitive Extension of PTK
slide_content_text: Consider propagation paths from root node to the subtree PTK ignores the clues outside the subtrees and the route embed how the Similar intuition to context-sensitive tree kernel (Zhou et al., 2007) : the length of propagation path from root to Context path 1) if and are the x-th ancestor nodes of and ,then ): similarity of subtrees rooted at and Kernel Algorithm Subtree root
target: (identical to slide_content_text above)
references: []
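The slide above summarizes the context-sensitive extension (cPTK) described in Section 4.3 of the paper text, although its formulas did not survive extraction. Below is a rough sketch of the extra context terms of Eq. 3, reusing Node, node_sim and subtree_sim from the sketch further above; the parent bookkeeping and the handling of unequal path lengths are my assumptions, not spelled out in the source.

```python
def path_to_root(node: Node, parent: dict) -> list:
    """node, its parent, grandparent, ... up to the source tweet (the tree root).
    `parent` is an assumed lookup mapping id(node) -> parent Node (None at the root)."""
    path = [node]
    while parent.get(id(path[-1])) is not None:
        path.append(parent[id(path[-1])])
    return path

def cptk_terms(a: Node, b: Node, parent_a: dict, parent_b: dict) -> float:
    """Sum of Lambda_x(a, b) over a's propagation path, per the cPTK description:
    x = 0 is the plain (context-free) PTK subtree similarity,
    x > 0 compares only the x-th ancestors of the two nodes."""
    pa, pb = path_to_root(a, parent_a), path_to_root(b, parent_b)
    total = subtree_sim(a, b)                      # x = 0: context-free PTK term
    for x in range(1, len(pa)):
        if x >= len(pb):                           # assumption: stop when b's path is shorter
            break
        total += node_sim(pa[x], pb[x])            # Lambda_x = f(a[x], b[x])
    return total
```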
gem_id: GEM-SciDuet-train-101#paper-1265#slide-11
paper_id: 1265
Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-ofthe-art rumor detection models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas.", "The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory 1 .", "The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line.", "We can see that after the initial post, the tweet was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread.", "A widely accepted definition of rumor is \"unverified and instrumentally relevant information statements in circulation\" (DiFonzo and Bordia, 2007) .", "This unverified information may eventually turn out to be true, or partly or entirely false.", "In today's ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society.", "Therefore, it is crucial to track and debunk such rumors in timely manner.", "Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors.", "However, such endeavor is manual, thus prone to poor coverage and low speed.", "Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.).", "But such an approach was over simplified as they ignored the dynamics of rumor propagation.", "Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013 (Kwon et al., , 2017 rather than the structure of propagation.", "So, can the propagation structure make any difference for differentiating rumors from nonrumors?", "Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014) .", "However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017) .", "Intuitively, for \"successful\" rumors being propagated as widely as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a 
lot of audiences joining in promoting the propagation.", "We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share.", "Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors.", "Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased.", "Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color).", "The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users' stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets.", "Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did.", "In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general.", "Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2 : Fragments of the propagation for two source tweets.", "Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps.", "Ma et al., 2015 Ma et al., , 2016 cannot be applied easily on such complex, dynamic structures.", "To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user's interactions to one another triggered by the source tweet.", "Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees.", "Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts.", "The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors.", "We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission.", "Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin.", "Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not.", "Since a rumor often begins as unverified and later turns 
out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016) , here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem.", "Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991) .", "Castillo et al.", "(2011) studied information credibility on Twitter using a wide range of hand-crafted features.", "Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015) .", "Zhao et al.", "(2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor.", "All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data.", "Some studies focus on finding temporal patterns for understanding rumor diffusion.", "Kown et al.", "(2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume.", "Ma et al.", "(2015) extended the model using time series to capture the variation of features over time.", "Friggeri et al.", "(2014) and Hannak et al.", "(2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites.", "More recently, Ma et al.", "(2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times.", "Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel.", "Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies.", "Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task.", "Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011) .", "Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001) , question-answering (Moschitti, 2006) , semantic analysis (Moschitti, 2004) , relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010) .", "These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors.", "Our proposed method is a substantial extension of tree kernel for modeling such structures.", "Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users.", "Once a user has posted a tweet, all his followers will receive the tweet.", "Furthermore, Twitter allows a user to retweet or comment another user's post, so that the information could reach beyond the network of the original creator.", "We model the propagation of each source tweet as a tree structure T (r) = V, E , where r is the source tweet as well as the root of the tree, V refers to a set of nodes each 
representing a responsive post (i.e., retweet or reply) of a user at a certain time to the source tweet r which initiates the circulation, and E is a set of directed edges corresponding to the response relation among the nodes in V .", "If there exists a directed edge from v i to v j , it means v j is a direct response to v i .", "More specifically, each node v ∈ V is repre- sented as a tuple v = (u v , c v , t v ) , which provides the following information: u v is the creator of the post, c v represents the text content of the post, and t v is the time lag between the source tweet r and v. In our case, u v contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., c v is a vector of binary features based on uni-grams and/or bi-grams representing the post's content.", "Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using kernel method called Propagation Tree Kernel (PTK).", "Our task is, given a propagation tree T (r) of a source tweet r, to predict the label of r. Background of Tree Kernel Before presenting our proposed algorithm, we briefly present the traditional tree kernel, which our PTK model is based on.", "Tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees.", "Given a syntactic parse tree, each node with its children is associated with a grammar production rule.", "Figure 3 illustrates the syntactic parse tree of \"cut a tree\" and its subtrees.", "A subtree is defined as any subgraph which has more than one nodes, with the restriction that entire (not partial) rule productions must be included.", "For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP β†’ D N (Collins and Duffy, 2001) .", "Following Collins and Duffy (2001) , given two parse trees T 1 and T 2 , the kernel function K(T 1 , T 2 ) is defined as: v i ∈V 1 v j ∈V 2 βˆ†(v i , v j ) (1) where V 1 and V 2 are the sets of all nodes respectively in T 1 and T 2 , and each node is associated with a production rule, and βˆ†(v i , v j ) evaluates the common subtrees rooted at v i and v j .", "βˆ†(v i , v j ) can be computed using the following recursive procedure (Collins and Duffy, 2001) : 1) if the production rules at v i and v j are different, then βˆ†(v i , v j ) = 0; 2) else if the production rules at v i and v j are same, and v i and v j have only leaf children (i.e., they are pre-terminal symbols), then βˆ†(v i , v j ) = Ξ»; 3) else βˆ†(v i , v j ) = Ξ» min(nc(v i ),nc(v j )) k=1 (1 + βˆ†(ch(v i , k), ch(v j , k))).", "where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and Ξ» (0 < Ξ» ≀ 1) is a decay factor.", "Ξ» = 1 yields the number of common subtrees; Ξ» < 1 down weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size.", "Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction of different types of rumors and non-rumors based on structural, linguistic and temporal properties.", "However, existing tree kernels cannot be readily applied on propagation trees because 1) unlike parse tree where the node is represented by enumerable nominal value (e.g., part-of-speech tag), the propagation tree node is given as a vector of continuous 
numerical values representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking if the same production rules and the same children are associated with the nodes in two subtrees being compared, whereas in our context the similarity function should be defined softly since hardly two nodes from different propagation trees are same.", "With the representation of propagation tree, we first define a function f to evaluate the similarity between two nodes v i and v j (we simplify the node representation for instance v i = (u i , c i , t i )) as the following: f (v i , v j ) = e βˆ’t (Ξ±E(u i , u j ) + (1 βˆ’ Ξ±)J (c i , c j )) where t = |t i βˆ’ t j | is the absolute difference between the time lags of v i and v j , E and J are user-based similarity and content-based similarity, respectively, and Ξ± is the trade-off parameter.", "The intuition of using exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor while the same posted far later from initial post may indicate the rumor is still unverified, despite that the two messages are semantically similar.", "The user-based similarity is defined as an Euclidean distance E(u i , u j ) = ||u i βˆ’ u j || 2 , where u i and u j are the user vectors of node v i and v j and || β€’ || 2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation.", "Contentwise, we use Jaccard coefficient to measure the similarity of post content: J (c i , c j ) = |N gram(c i ) ∩ N gram(c j )| |N gram(c i ) βˆͺ N gram(c j )| where c i and c j are the sets of content words in two nodes.", "For n-grams here, we adopt both uni-grams and bi-grams.", "It can capture cue terms e.g., 'false', 'debunk', 'not true', etc.", "commonly occurring in rumors but not in non-rumors.", "Given two propagation trees T 1 = V 1 , E 1 and T 2 = V 2 , E 2 , PTK aims to compute the similarity between T 1 and T 2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v i ∈ V 1 , we obtain v i ∈ V 2 , the most similar node of v i from V 2 : v i = arg max v j ∈V 2 f (v i , v j ) Similarly, for each v j ∈ V 2 , we obtain v j ∈ V 1 : v j = arg max v i ∈V 1 f (v i , v j ) Then, the propagation tree kernel K P (T 1 , T 2 ) is defined as: v i ∈V 1 Ξ›(v i , v i ) + v j ∈V 2 Ξ›(v j , v j ) (2) where Ξ›(v, v ) evaluates the similarity of two subtrees rooted at v and v , which is computed recursively as follows: 1) if v or v are leaf nodes, then Ξ›(v, v ) = f (v, v ); 2) else Ξ›(v, v ) = f (v, v ) min(nc(v),nc(v )) k=1 (1 + Ξ›(ch(v, k), ch(v , k))) Note that unlike traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, Ξ» in tree kernel is not needed as subtree size is not an issue here thanks to node similarity f .", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses on rumors spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout entire propagation periods, and structural and temporal patterns help 
for longitudinal diffusion (Zubiaga et al., 2016; Kwon et al., 2017) .", "Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from source post to the current subtree.", "Intuitively, propagation paths provide further clue for determining the truthfulness of information since they embed the route and context of how the propagation happens.", "Therefore, we propose contextsensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares similar intuition with the context-sensitive tree kernel (Zhou et al., 2007) .", "For a propagation tree node v ∈ T (r), let L r v be the length (i.e., # of nodes) of the propagation path from root r to v, and v[x] be the x-th ancestor of v on the path starting from v (0 ≀ x < L r v , v[0] = v, v[L r v βˆ’ 1] = r) .", "cPTK evaluates the similarity between two trees T 1 (r 1 ) and T 2 (r 2 ) as follows: v i ∈V 1 L r 1 v i βˆ’1 x=0 Ξ› x (v i , v i ) + v j ∈V 2 L r 2 v j βˆ’1 x=0 Ξ› x (v j , v j ) (3) where Ξ› x (v, v ) measures the similarity of sub- trees rooted at v[x] and v [x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Ξ› x (v, v ) = f (v[x], v [x]), where v[x] and v [x] are the x-th ancestor nodes of v and v on the respective propagation path.", "2) else Ξ› x (v, v ) = Ξ›(v, v ), namely PTK.", "Clearly, PTK is a special case of cPTK when x = 0 (see equation 3).", "cPTK evaluates the oc-currence of both context-free (without considering ancestors on propagation paths) and contextsensitive cases.", "Rumor Detection via Kernel Learning The advantage of kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004) .", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and its similarity values with all training instances as feature space.", "Therefore, the kernel matrix of training set is m Γ— m and that of test set is n Γ— m where m and n are the sizes of training and test sets, respectively.", "For our multi-class task, we perform a one-vsall classification for each label and then assign the one with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to interpretability of results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015) .", "Experiments and Results Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more accurately, the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016) .", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets 2 that are highly retweeted or replied.", "We then collected all the propagation threads (i.e., 
retweets and replies) for these source tweets.", "Because Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from 2 Though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful Twrench 3 and crawled the replies through Twitter's web interface.", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article in rumor debunking websites (e.g., snopes.com, Emergent.info, etc).", "Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) For a source tweet in false rumor event, we flip over the label and assign true to the source tweet if it expresses denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/nochange rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible 4 .", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time-series to model the variation of a set of hand-crafted features (Ma et al., 2015) .", "DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015) , which searches for enquiry phrases and clusters disputed factual claims, and ranked the clustered results based on statistical features.", "DTC and SVM-RBF: The Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with RBF kernel (Yang et al., 2012) , respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: The Random Forest Classifier proposed by Kwon et al.", "(2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to the user, linguistic and structure characteristics.", "GRU: The RNN-based rumor detection model proposed by Ma et al.", "(2016) with gated recurrent unit for representation learning of high-level features from relevant posts over time.", "BOW: A naive baseline we worked by representing the text in each tree using bag-of-words and building the rumor classifier with linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTKand cPTKare the setting of only using content while ignoring user properties.", "We implemented DTC and RFC with Weka 5 , SVM models with LibSVM 6 and GRU with Theano 7 .", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy, F 1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best, which learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., \"what?", "\", \"really?", "\", \"not sure\", etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.", "Although DTR uses a set of 
regular expressions, we found that only 19.59% and 22.21% of the tweets in our datasets contain these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features, especially focusing on temporal traits.", "But none of the models can directly incorporate structured propagation patterns for deep similarity comparison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of a structural kernel like ours.", "So, they performed obviously worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich in linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTK-, which only uses text, is already better than GRU, demonstrating the importance of propagation structures.", "PTK, which combines text and user information, yields better results on both datasets, implying that both properties are complementary and that PTK integrating flat and structured information is obviously more effective.", "It is also observed that cPTK outperforms PTK except for the non-rumor class.", "This suggests that the context-sensitive modeling based on PTK is effective for different types of rumors, but for non-rumors, it seems that considering the context of the propagation path is not always helpful.", "(Figure 5 caption: The example subtree of a rumor captured by the algorithm at an early stage of propagation.)", "This might be due to the generally weak signals originating from node properties on the paths during a non-rumor's diffusion, since user distribution patterns in non-rumors do not seem as obvious as in rumors.", "This is not an issue in cPTK- since user information is not considered at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs. 
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our approach can differentiate the various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures can be taken as quickly as possible.", "In the early detection task, all the posts after a detection deadline are invisible during testing.", "The earlier the deadline, the less propagation information is available.", "Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNNs) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance compared to the other models.", "Particularly, cPTK achieves 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, which is much faster than the other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures, especially at the early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such a pattern was not witnessed in non-rumors at the early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on a kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Inspired by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors at finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms state-of-the-art baselines by a large margin for both general and early rumor detection tasks.", "Since the kernel-based approach covers more structural information than feature-based methods, it allows the kernel to further incorporate information from a high-dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring a network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering the massive unlabeled rumorous data from social media." ] }
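To make the PTK/cPTK computation above concrete, the following is a minimal Python sketch of the node similarity f and the soft subtree similarity Λ described in the paper text. It is illustrative only: the class name PropNode and its fields are hypothetical simplifications, and the full method additionally pairs each node with its most similar counterpart in the other tree (the argmax step of equation 2) and, for cPTK, sums Λ_x over the ancestors on the root-to-node path, both omitted here for brevity.

```python
import math
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class PropNode:
    """One post in a propagation tree (hypothetical, simplified fields)."""
    user: List[float]                # numeric user-profile vector
    ngrams: Set[str]                 # uni-/bi-grams of the post text
    t: float                         # time lag from the source tweet
    children: List["PropNode"] = field(default_factory=list)


def node_sim(vi: PropNode, vj: PropNode, alpha: float = 0.5) -> float:
    """f(vi, vj) = exp(-|ti - tj|) * (alpha * E + (1 - alpha) * J), as in the paper."""
    time_decay = math.exp(-abs(vi.t - vj.t))
    # E: Euclidean distance between the user vectors (used as defined in the text)
    e = math.sqrt(sum((a - b) ** 2 for a, b in zip(vi.user, vj.user)))
    # J: Jaccard coefficient over the n-gram sets
    union = vi.ngrams | vj.ngrams
    j = len(vi.ngrams & vj.ngrams) / len(union) if union else 0.0
    return time_decay * (alpha * e + (1 - alpha) * j)


def subtree_sim(v: PropNode, w: PropNode, alpha: float = 0.5) -> float:
    """Lambda(v, w): soft count of similar subtrees rooted at v and w."""
    f = node_sim(v, w, alpha)
    if not v.children or not w.children:          # leaf case: Lambda = f
        return f
    prod = 1.0
    for cv, cw in zip(v.children, w.children):    # pair children up to the smaller arity
        prod *= 1.0 + subtree_sim(cv, cw, alpha)
    return f * prod
```

In the full model, subtree_sim would be evaluated between each node and its most similar counterpart and summed over both trees (equation 2), with cPTK adding the same evaluation for every ancestor on the propagation path (equation 3).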
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
GEM-SciDuet-train-101#paper-1265#slide-11
Rumor Detection via Kernel Learning
Incorporate the proposed tree kernel functions (i.e., PTK or cPTK) into a supervised learning framework, for which we utilize a kernel-based SVM classifier. Avoid feature engineering: the kernel function can explore an implicit feature space when calculating the similarity. For multi-class task, perform One vs. all, i.e., building K (# of classes) basic binary classifiers so as to separate one class from all the others.
Incorporate the proposed tree kernel functions (i.e., PTK or cPTK) into a supervised learning framework, for which we utilize a kernel-based SVM classifier. Avoid feature engineering: the kernel function can explore an implicit feature space when calculating the similarity. For multi-class task, perform One vs. all, i.e., building K (# of classes) basic binary classifiers so as to separate one class from all the others.
[]
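A sketch of the kernel-learning step summarized in the slide above: the pairwise PTK/cPTK similarities form precomputed kernel matrices that are fed to an SVM with one-vs-rest decisions. The paper reports using LibSVM; scikit-learn is used here purely for illustration under that assumption, and all function and argument names are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC


def kernel_matrix(rows, cols, tree_kernel):
    """Pairwise tree-kernel values K[i, j] = tree_kernel(rows[i], cols[j])."""
    K = np.zeros((len(rows), len(cols)))
    for i, a in enumerate(rows):
        for j, b in enumerate(cols):
            K[i, j] = tree_kernel(a, b)
    return K


def train_and_predict(train_trees, train_labels, test_trees, tree_kernel):
    """Fit a one-vs-rest SVM on precomputed PTK/cPTK similarities and label the test trees."""
    K_train = kernel_matrix(train_trees, train_trees, tree_kernel)   # m x m
    K_test = kernel_matrix(test_trees, train_trees, tree_kernel)     # n x m
    clf = SVC(kernel="precomputed", decision_function_shape="ovr")
    clf.fit(K_train, train_labels)   # labels: non-rumor, false, true, unverified
    return clf.predict(K_test)
```

With this setup, swapping PTK for cPTK only changes the tree_kernel argument, which mirrors how the two variants are compared in the experiments.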
GEM-SciDuet-train-101#paper-1265#slide-12
1265
Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
GEM-SciDuet-train-101#paper-1265#slide-12
Data Collection
Construct our propagation tree datasets based on two reference datasets. Extract popular source tweets (source tweet: highly retweeted or replied; retweets: Twrench.com; replies: Web crawler). Convert event label: revised labels, binary -> quaternary.
Construct our propagation tree datasets based on two reference datasets. Extract popular source tweets (source tweet: highly retweeted or replied; retweets: Twrench.com; replies: Web crawler). Convert event label: revised labels, binary -> quaternary.
[]
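The label conversion summarized in the Data Collection slide above (event labels turned from binary to quaternary, then source tweets labeled with a flip-over rule for denial stances) can be expressed as a small rule function. This is a minimal sketch of the rules as stated in the paper text; the stance values and label strings below are illustrative assumptions, not the dataset's actual field names.

```python
def source_tweet_label(event_label: str, stance: str) -> str:
    """Map a source tweet to one of the four classes following the stated rules.

    event_label: quaternary event tag, one of "non-rumor", "false", "true", "unverified"
    stance: stance expressed by the source tweet, e.g. "deny" (illustrative value)
    """
    if event_label in ("non-rumor", "unverified"):
        return event_label                               # rule 1: inherit the event label
    if event_label == "false":
        return "true" if stance == "deny" else "false"   # rule 2: flip over on denial
    if event_label == "true":
        return "false" if stance == "deny" else "true"   # rule 3: analogous flip-over
    raise ValueError(f"unexpected event label: {event_label}")
```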
GEM-SciDuet-train-101#paper-1265#slide-13
1265
Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-ofthe-art rumor detection models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas.", "The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory 1 .", "The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line.", "We can see that after the initial post, the tweet was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread.", "A widely accepted definition of rumor is \"unverified and instrumentally relevant information statements in circulation\" (DiFonzo and Bordia, 2007) .", "This unverified information may eventually turn out to be true, or partly or entirely false.", "In today's ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society.", "Therefore, it is crucial to track and debunk such rumors in timely manner.", "Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors.", "However, such endeavor is manual, thus prone to poor coverage and low speed.", "Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.).", "But such an approach was over simplified as they ignored the dynamics of rumor propagation.", "Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013 (Kwon et al., , 2017 rather than the structure of propagation.", "So, can the propagation structure make any difference for differentiating rumors from nonrumors?", "Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014) .", "However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017) .", "Intuitively, for \"successful\" rumors being propagated as widely as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a 
lot of audiences joining in promoting the propagation.", "We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share.", "Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors.", "Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased.", "Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color).", "The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users' stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets.", "Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did.", "In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general.", "Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2 : Fragments of the propagation for two source tweets.", "Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps.", "Ma et al., 2015 Ma et al., , 2016 cannot be applied easily on such complex, dynamic structures.", "To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user's interactions to one another triggered by the source tweet.", "Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees.", "Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts.", "The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors.", "We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission.", "Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin.", "Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not.", "Since a rumor often begins as unverified and later turns 
out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016) , here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem.", "Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991) .", "Castillo et al.", "(2011) studied information credibility on Twitter using a wide range of hand-crafted features.", "Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015) .", "Zhao et al.", "(2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor.", "All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data.", "Some studies focus on finding temporal patterns for understanding rumor diffusion.", "Kown et al.", "(2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume.", "Ma et al.", "(2015) extended the model using time series to capture the variation of features over time.", "Friggeri et al.", "(2014) and Hannak et al.", "(2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites.", "More recently, Ma et al.", "(2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times.", "Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel.", "Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies.", "Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task.", "Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011) .", "Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001) , question-answering (Moschitti, 2006) , semantic analysis (Moschitti, 2004) , relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010) .", "These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors.", "Our proposed method is a substantial extension of tree kernel for modeling such structures.", "Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users.", "Once a user has posted a tweet, all his followers will receive the tweet.", "Furthermore, Twitter allows a user to retweet or comment another user's post, so that the information could reach beyond the network of the original creator.", "We model the propagation of each source tweet as a tree structure T (r) = V, E , where r is the source tweet as well as the root of the tree, V refers to a set of nodes each 
representing a responsive post (i.e., retweet or reply) of a user at a certain time to the source tweet r which initiates the circulation, and E is a set of directed edges corresponding to the response relation among the nodes in V .", "If there exists a directed edge from v i to v j , it means v j is a direct response to v i .", "More specifically, each node v ∈ V is repre- sented as a tuple v = (u v , c v , t v ) , which provides the following information: u v is the creator of the post, c v represents the text content of the post, and t v is the time lag between the source tweet r and v. In our case, u v contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., c v is a vector of binary features based on uni-grams and/or bi-grams representing the post's content.", "Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using kernel method called Propagation Tree Kernel (PTK).", "Our task is, given a propagation tree T (r) of a source tweet r, to predict the label of r. Background of Tree Kernel Before presenting our proposed algorithm, we briefly present the traditional tree kernel, which our PTK model is based on.", "Tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees.", "Given a syntactic parse tree, each node with its children is associated with a grammar production rule.", "Figure 3 illustrates the syntactic parse tree of \"cut a tree\" and its subtrees.", "A subtree is defined as any subgraph which has more than one nodes, with the restriction that entire (not partial) rule productions must be included.", "For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP β†’ D N (Collins and Duffy, 2001) .", "Following Collins and Duffy (2001) , given two parse trees T 1 and T 2 , the kernel function K(T 1 , T 2 ) is defined as: v i ∈V 1 v j ∈V 2 βˆ†(v i , v j ) (1) where V 1 and V 2 are the sets of all nodes respectively in T 1 and T 2 , and each node is associated with a production rule, and βˆ†(v i , v j ) evaluates the common subtrees rooted at v i and v j .", "βˆ†(v i , v j ) can be computed using the following recursive procedure (Collins and Duffy, 2001) : 1) if the production rules at v i and v j are different, then βˆ†(v i , v j ) = 0; 2) else if the production rules at v i and v j are same, and v i and v j have only leaf children (i.e., they are pre-terminal symbols), then βˆ†(v i , v j ) = Ξ»; 3) else βˆ†(v i , v j ) = Ξ» min(nc(v i ),nc(v j )) k=1 (1 + βˆ†(ch(v i , k), ch(v j , k))).", "where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and Ξ» (0 < Ξ» ≀ 1) is a decay factor.", "Ξ» = 1 yields the number of common subtrees; Ξ» < 1 down weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size.", "Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction of different types of rumors and non-rumors based on structural, linguistic and temporal properties.", "However, existing tree kernels cannot be readily applied on propagation trees because 1) unlike parse tree where the node is represented by enumerable nominal value (e.g., part-of-speech tag), the propagation tree node is given as a vector of continuous 
numerical values representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking if the same production rules and the same children are associated with the nodes in two subtrees being compared, whereas in our context the similarity function should be defined softly since hardly two nodes from different propagation trees are same.", "With the representation of propagation tree, we first define a function f to evaluate the similarity between two nodes v i and v j (we simplify the node representation for instance v i = (u i , c i , t i )) as the following: f (v i , v j ) = e βˆ’t (Ξ±E(u i , u j ) + (1 βˆ’ Ξ±)J (c i , c j )) where t = |t i βˆ’ t j | is the absolute difference between the time lags of v i and v j , E and J are user-based similarity and content-based similarity, respectively, and Ξ± is the trade-off parameter.", "The intuition of using exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor while the same posted far later from initial post may indicate the rumor is still unverified, despite that the two messages are semantically similar.", "The user-based similarity is defined as an Euclidean distance E(u i , u j ) = ||u i βˆ’ u j || 2 , where u i and u j are the user vectors of node v i and v j and || β€’ || 2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation.", "Contentwise, we use Jaccard coefficient to measure the similarity of post content: J (c i , c j ) = |N gram(c i ) ∩ N gram(c j )| |N gram(c i ) βˆͺ N gram(c j )| where c i and c j are the sets of content words in two nodes.", "For n-grams here, we adopt both uni-grams and bi-grams.", "It can capture cue terms e.g., 'false', 'debunk', 'not true', etc.", "commonly occurring in rumors but not in non-rumors.", "Given two propagation trees T 1 = V 1 , E 1 and T 2 = V 2 , E 2 , PTK aims to compute the similarity between T 1 and T 2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v i ∈ V 1 , we obtain v i ∈ V 2 , the most similar node of v i from V 2 : v i = arg max v j ∈V 2 f (v i , v j ) Similarly, for each v j ∈ V 2 , we obtain v j ∈ V 1 : v j = arg max v i ∈V 1 f (v i , v j ) Then, the propagation tree kernel K P (T 1 , T 2 ) is defined as: v i ∈V 1 Ξ›(v i , v i ) + v j ∈V 2 Ξ›(v j , v j ) (2) where Ξ›(v, v ) evaluates the similarity of two subtrees rooted at v and v , which is computed recursively as follows: 1) if v or v are leaf nodes, then Ξ›(v, v ) = f (v, v ); 2) else Ξ›(v, v ) = f (v, v ) min(nc(v),nc(v )) k=1 (1 + Ξ›(ch(v, k), ch(v , k))) Note that unlike traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, Ξ» in tree kernel is not needed as subtree size is not an issue here thanks to node similarity f .", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses on rumors spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout entire propagation periods, and structural and temporal patterns help 
"Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from the source post to the current subtree.", "Intuitively, propagation paths provide a further clue for determining the truthfulness of information since they embed the route and context of how the propagation happens.", "Therefore, we propose context-sensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares a similar intuition with the context-sensitive tree kernel (Zhou et al., 2007).", "For a propagation tree node v ∈ T(r), let L_v^r be the length (i.e., # of nodes) of the propagation path from root r to v, and v[x] be the x-th ancestor of v on the path starting from v (0 ≤ x < L_v^r, v[0] = v, v[L_v^r - 1] = r).", "cPTK evaluates the similarity between two trees T_1(r_1) and T_2(r_2) as follows: Σ_{v_i ∈ V_1} Σ_{x=0}^{L_{v_i}^{r_1} - 1} Λ_x(v_i, v_i') + Σ_{v_j ∈ V_2} Σ_{x=0}^{L_{v_j}^{r_2} - 1} Λ_x(v_j, v_j') (3) where Λ_x(v, v') measures the similarity of subtrees rooted at v[x] and v'[x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Λ_x(v, v') = f(v[x], v'[x]), where v[x] and v'[x] are the x-th ancestor nodes of v and v' on the respective propagation paths; 2) else Λ_x(v, v') = Λ(v, v'), namely PTK.", "Clearly, PTK is a special case of cPTK when x = 0 (see equation 3).", "cPTK evaluates the occurrence of both context-free (without considering ancestors on propagation paths) and context-sensitive cases.", "Rumor Detection via Kernel Learning The advantage of a kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004).", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and its similarity values with all training instances as the feature space.", "Therefore, the kernel matrix of the training set is m × m and that of the test set is n × m, where m and n are the sizes of the training and test sets, respectively.", "For our multi-class task, we perform a one-vs-all classification for each label and then assign the one with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to interpretability of results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015).", "Experiments and Results Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more accurately, the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016).", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets that are highly retweeted or replied (though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful).", "We then collected all the propagation threads (i.e., retweets and replies) for these source tweets.",
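A sketch of the context-sensitive extension, reusing node_sim and subtree_sim from the PTK sketch above. It additionally assumes each node record stores a "parent" id (None at the root), which is an added convenience field, and where the excerpt leaves the case of unequal path lengths implicit, the sketch simply stops at the shorter of the two ancestor paths.

def ancestor_path(tree, v):
    """Path [v, parent(v), ..., root]; assumes each node record has a "parent" id (None at the root)."""
    path = [v]
    while tree[path[-1]]["parent"] is not None:
        path.append(tree[path[-1]]["parent"])
    return path

def cptk(tree_1, tree_2, alpha=0.5):
    """Equation 3: the x = 0 term is plain PTK; for x > 0 the x-th ancestors of the matched pair are compared."""
    total = 0.0
    for tree_a, tree_b in ((tree_1, tree_2), (tree_2, tree_1)):
        for v in tree_a:
            best = max(tree_b, key=lambda w: node_sim(tree_a[v], tree_b[w], alpha))
            total += subtree_sim(tree_a, v, tree_b, best, alpha)          # x = 0: context-free PTK term
            path_v = ancestor_path(tree_a, v)
            path_w = ancestor_path(tree_b, best)
            for x in range(1, min(len(path_v), len(path_w))):             # x > 0: context-sensitive terms
                total += node_sim(tree_a[path_v[x]], tree_b[path_w[x]], alpha)
    return total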
"Because the Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from Twrench and crawled the replies through Twitter's web interface.", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article in rumor debunking websites (e.g., snopes.com, Emergent.info, etc.).", "Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) For a source tweet in a false rumor event, we flip over the label and assign true to the source tweet if it expresses a denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/no-change rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible.", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time-series to model the variation of a set of hand-crafted features (Ma et al., 2015).", "DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015), which searches for enquiry phrases and clusters disputed factual claims, and ranks the clustered results based on statistical features.", "DTC and SVM-RBF: The Twitter information credibility model using a Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with RBF kernel (Yang et al., 2012), respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: The Random Forest Classifier proposed by Kwon et al. (2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to the user, linguistic and structural characteristics.", "GRU: The RNN-based rumor detection model proposed by Ma et al. (2016) with gated recurrent units for representation learning of high-level features from relevant posts over time.", "BOW: A naive baseline we worked with by representing the text in each tree using bag-of-words and building the rumor classifier with a linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTK- and cPTK- are the settings that only use content while ignoring user properties.", "We implemented DTC and RFC with Weka, SVM models with LibSVM and GRU with Theano.", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy and F1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best, which learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., \"what?\", \"really?\", \"not sure\", etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.",
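To illustrate the kernel-learning setup described above (an m × m training Gram matrix, an n × m test matrix, and one-vs-all SVMs over the four classes), here is a small sketch built on scikit-learn's precomputed-kernel SVC. The paper's SVM experiments used LibSVM, so this is only an illustrative equivalent; the function and variable names are placeholders, and kernel can be the ptk or cptk function from the earlier sketches.

import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier

def gram_matrix(trees_a, trees_b, kernel):
    """Pairwise kernel values; rows index trees_a, columns index trees_b."""
    return np.array([[kernel(t_a, t_b) for t_b in trees_b] for t_a in trees_a])

def fit_and_predict(train_trees, train_labels, test_trees, kernel):
    """Train one-vs-all SVMs on the precomputed kernel and label the test trees."""
    k_train = gram_matrix(train_trees, train_trees, kernel)   # m x m
    k_test = gram_matrix(test_trees, train_trees, kernel)     # n x m
    clf = OneVsRestClassifier(SVC(kernel="precomputed"))      # one classifier per class, highest score wins
    clf.fit(k_train, train_labels)
    return clf.predict(k_test)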
"Although DTR uses a set of regular expressions, we found only 19.59% and 22.21% of tweets in our datasets containing these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features especially focusing on temporal traits.", "But none of the models can directly incorporate structured propagation patterns for deep similarity comparison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of a structural kernel like ours.", "So, they performed obviously worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich in linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTK-, which only uses text, is already better than GRU, demonstrating the importance of propagation structures.", "PTK that combines text and user yields better results on both datasets, implying that both properties are complementary and PTK integrating flat and structured information is obviously more effective.", "It is also observed that cPTK outperforms PTK except for the non-rumor class.", "This suggests the context-sensitive modeling based on PTK is effective for different types of rumors, but for non-rumors, it seems that considering the context of the propagation path is not always helpful.", "[Figure 5: The example subtree of a rumor captured by the algorithm at an early stage of propagation.]", "This might be due to the generally weak signals originating from node properties on the paths during a non-rumor's diffusion, since user distribution patterns in non-rumors do not seem as obvious as in rumors.", "This is not an issue in cPTK- since user information is not considered at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs.
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our ap-proach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures could be taken as quickly as possible.", "In early detection task, all the posts after a detection deadline are invisible during test.", "The earlier the deadline, the less propagation information can be available.", "Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNN) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance than other models.", "Particularly, cPTK achieve 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, that is much faster than other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures especially at early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such pattern was not witnessed in non-rumors in early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms stateof-the-art baselines with large margin for both general and early rumor detection tasks.", "Since kernel-based approach covers more structural information than feature-based methods, it allows kernel to further incorporate information from a high dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media." ] }
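For the early-detection setting described above, where every post after the detection deadline is invisible at test time, a propagation tree can be pruned by its nodes' time lags before the kernels are computed. This is a sketch under the same assumed tree layout as the earlier snippets; it relies on a response always being later than the post it responds to, so pruning never detaches a kept node from its parent.

def prune_by_deadline(tree, deadline):
    """Keep the source tweet and only those responses whose time lag t_v is within the deadline."""
    kept = {nid for nid, node in tree.items() if node["t"] <= deadline}
    pruned = {}
    for nid in kept:
        node = dict(tree[nid])                                  # shallow copy so the full tree stays intact
        node["children"] = [c for c in node["children"] if c in kept]
        pruned[nid] = node
    return pruned

# Example: evaluate with only the first 24 hours of propagation visible (assuming time lags are in hours).
# early_trees = [prune_by_deadline(t, deadline=24) for t in test_trees]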
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
GEM-SciDuet-train-101#paper-1265#slide-13
Statistics of Data Collection
URL of the datasets: https://www.dropbox.com/s/0jhsfwep3ywvpca/rumdetect2017.zip?dl=0
URL of the datasets: https://www.dropbox.com/s/0jhsfwep3ywvpca/rumdetect2017.zip?dl=0
[]
GEM-SciDuet-train-101#paper-1265#slide-14
1265
GEM-SciDuet-train-101#paper-1265#slide-14
Approaches to compare with
DTR: Decision tree-based ranking model using enquiry phrases to identify trending rumors (Zhao et al., 2015) DTC and SVM-RBF: Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011); SVM-based model with RBF kernel (Yang et al., 2012) RFC: Random Forest Classifier using three parameters to fit the temporal tweets volume curve (Kwon et al., 2013) SVM-TS: Linear SVM classifier using time-series structures to model the variation of social context features. (Ma et al., 2015) GRU: The RNN-based rumor detection model. (Ma et al., 2016) BOW: linear SVM classifier using bag-of-words. Ours (PTK and cPTK): Our kernel based model PTK- and cPTK-: Our kernel based model with subset node features.
DTR: Decision tree-based ranking model using enquiry phrases to identify trending rumors (Zhao et al., 2015) DTC and SVM-RBF: Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011); SVM-based model with RBF kernel (Yang et al., 2012) RFC: Random Forest Classifier using three parameters to fit the temporal tweets volume curve (Kwon et al., 2013) SVM-TS: Linear SVM classifier using time-series structures to model the variation of social context features. (Ma et al., 2015) GRU: The RNN-based rumor detection model. (Ma et al., 2016) BOW: linear SVM classifier using bag-of-words. Ours (PTK and cPTK): Our kernel based model PTK- and cPTK-: Our kernel based model with subset node features.
[]
GEM-SciDuet-train-101#paper-1265#slide-15
1265
Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-ofthe-art rumor detection models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas.", "The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory 1 .", "The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line.", "We can see that after the initial post, the tweet was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread.", "A widely accepted definition of rumor is \"unverified and instrumentally relevant information statements in circulation\" (DiFonzo and Bordia, 2007) .", "This unverified information may eventually turn out to be true, or partly or entirely false.", "In today's ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society.", "Therefore, it is crucial to track and debunk such rumors in timely manner.", "Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors.", "However, such endeavor is manual, thus prone to poor coverage and low speed.", "Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.).", "But such an approach was over simplified as they ignored the dynamics of rumor propagation.", "Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013 (Kwon et al., , 2017 rather than the structure of propagation.", "So, can the propagation structure make any difference for differentiating rumors from nonrumors?", "Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014) .", "However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017) .", "Intuitively, for \"successful\" rumors being propagated as widely as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a 
lot of audiences joining in promoting the propagation.", "We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share.", "Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors.", "Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased.", "Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color).", "The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users' stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets.", "Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did.", "In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general.", "Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2 : Fragments of the propagation for two source tweets.", "Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps.", "Ma et al., 2015 Ma et al., , 2016 cannot be applied easily on such complex, dynamic structures.", "To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user's interactions to one another triggered by the source tweet.", "Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees.", "Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts.", "The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors.", "We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission.", "Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin.", "Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not.", "Since a rumor often begins as unverified and later turns 
out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016) , here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem.", "Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991) .", "Castillo et al.", "(2011) studied information credibility on Twitter using a wide range of hand-crafted features.", "Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015) .", "Zhao et al.", "(2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor.", "All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data.", "Some studies focus on finding temporal patterns for understanding rumor diffusion.", "Kown et al.", "(2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume.", "Ma et al.", "(2015) extended the model using time series to capture the variation of features over time.", "Friggeri et al.", "(2014) and Hannak et al.", "(2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites.", "More recently, Ma et al.", "(2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times.", "Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel.", "Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies.", "Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task.", "Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011) .", "Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001) , question-answering (Moschitti, 2006) , semantic analysis (Moschitti, 2004) , relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010) .", "These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors.", "Our proposed method is a substantial extension of tree kernel for modeling such structures.", "Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users.", "Once a user has posted a tweet, all his followers will receive the tweet.", "Furthermore, Twitter allows a user to retweet or comment another user's post, so that the information could reach beyond the network of the original creator.", "We model the propagation of each source tweet as a tree structure T (r) = V, E , where r is the source tweet as well as the root of the tree, V refers to a set of nodes each 
representing a responsive post (i.e., retweet or reply) of a user at a certain time to the source tweet r which initiates the circulation, and E is a set of directed edges corresponding to the response relation among the nodes in V.", "If there exists a directed edge from v_i to v_j, it means v_j is a direct response to v_i.", "More specifically, each node v ∈ V is represented as a tuple v = (u_v, c_v, t_v), which provides the following information: u_v is the creator of the post, c_v represents the text content of the post, and t_v is the time lag between the source tweet r and v. In our case, u_v contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., and c_v is a vector of binary features based on uni-grams and/or bi-grams representing the post's content.", "Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using a kernel method called Propagation Tree Kernel (PTK).", "Our task is, given a propagation tree T(r) of a source tweet r, to predict the label of r. Background of Tree Kernel Before presenting our proposed algorithm, we briefly present the traditional tree kernel, which our PTK model is based on.", "The tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees.", "Given a syntactic parse tree, each node with its children is associated with a grammar production rule.", "Figure 3 illustrates the syntactic parse tree of 'cut a tree' and its subtrees.", "A subtree is defined as any subgraph which has more than one node, with the restriction that entire (not partial) rule productions must be included.", "For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP → D N (Collins and Duffy, 2001).", "Following Collins and Duffy (2001), given two parse trees T_1 and T_2, the kernel function K(T_1, T_2) is defined as: K(T_1, T_2) = Σ_{v_i ∈ V_1} Σ_{v_j ∈ V_2} Δ(v_i, v_j) (1), where V_1 and V_2 are the sets of all nodes in T_1 and T_2, respectively, each node is associated with a production rule, and Δ(v_i, v_j) evaluates the common subtrees rooted at v_i and v_j.", "Δ(v_i, v_j) can be computed using the following recursive procedure (Collins and Duffy, 2001): 1) if the production rules at v_i and v_j are different, then Δ(v_i, v_j) = 0; 2) else if the production rules at v_i and v_j are the same, and v_i and v_j have only leaf children (i.e., they are pre-terminal symbols), then Δ(v_i, v_j) = λ; 3) else Δ(v_i, v_j) = λ Π_{k=1}^{min(nc(v_i), nc(v_j))} (1 + Δ(ch(v_i, k), ch(v_j, k))), where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and λ (0 < λ ≤ 1) is a decay factor.", "λ = 1 yields the number of common subtrees; λ < 1 down-weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size.", "Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction of different types of rumors and non-rumors based on structural, linguistic and temporal properties.", "However, existing tree kernels cannot be readily applied on propagation trees because 1) unlike a parse tree, where the node is represented by an enumerable nominal value (e.g., a part-of-speech tag), the propagation tree node is given as a vector of continuous
numerical values representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking whether the same production rules and the same children are associated with the nodes in the two subtrees being compared, whereas in our context the similarity function should be defined softly, since hardly any two nodes from different propagation trees are the same.", "With the representation of the propagation tree, we first define a function f to evaluate the similarity between two nodes v_i and v_j (we simplify the node representation, for instance v_i = (u_i, c_i, t_i)) as the following: f(v_i, v_j) = e^{-t} (αE(u_i, u_j) + (1 − α)J(c_i, c_j)), where t = |t_i − t_j| is the absolute difference between the time lags of v_i and v_j, E and J are the user-based similarity and content-based similarity, respectively, and α is the trade-off parameter.", "The intuition of using an exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor, while the same message posted far later from the initial post may indicate the rumor is still unverified, even though the two messages are semantically similar.", "The user-based similarity is defined as a Euclidean distance E(u_i, u_j) = ||u_i − u_j||_2, where u_i and u_j are the user vectors of nodes v_i and v_j and ||·||_2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation.", "Content-wise, we use the Jaccard coefficient to measure the similarity of post content: J(c_i, c_j) = |Ngram(c_i) ∩ Ngram(c_j)| / |Ngram(c_i) ∪ Ngram(c_j)|, where c_i and c_j are the sets of content words in the two nodes.", "For n-grams here, we adopt both uni-grams and bi-grams.", "It can capture cue terms, e.g., 'false', 'debunk', 'not true', etc., commonly occurring in rumors but not in non-rumors.", "Given two propagation trees T_1 = ⟨V_1, E_1⟩ and T_2 = ⟨V_2, E_2⟩, PTK aims to compute the similarity between T_1 and T_2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v_i ∈ V_1, we obtain v_i' ∈ V_2, the most similar node to v_i from V_2: v_i' = argmax_{v_j ∈ V_2} f(v_i, v_j); similarly, for each v_j ∈ V_2, we obtain v_j' ∈ V_1: v_j' = argmax_{v_i ∈ V_1} f(v_i, v_j).", "Then, the propagation tree kernel K_P(T_1, T_2) is defined as: K_P(T_1, T_2) = Σ_{v_i ∈ V_1} Λ(v_i, v_i') + Σ_{v_j ∈ V_2} Λ(v_j, v_j') (2), where Λ(v, v') evaluates the similarity of two subtrees rooted at v and v', which is computed recursively as follows: 1) if v or v' are leaf nodes, then Λ(v, v') = f(v, v'); 2) else Λ(v, v') = f(v, v') Π_{k=1}^{min(nc(v), nc(v'))} (1 + Λ(ch(v, k), ch(v', k))).", "Note that unlike the traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, the λ in the tree kernel is not needed, as subtree size is not an issue here thanks to the node similarity f.", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses on rumor spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout entire propagation periods, and structural and temporal patterns help
for longitudinal diffusion (Zubiaga et al., 2016; Kwon et al., 2017).", "Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from the source post to the current subtree.", "Intuitively, propagation paths provide further clues for determining the truthfulness of information, since they embed the route and context of how the propagation happens.", "Therefore, we propose context-sensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares a similar intuition with the context-sensitive tree kernel (Zhou et al., 2007).", "For a propagation tree node v ∈ T(r), let L^r_v be the length (i.e., # of nodes) of the propagation path from root r to v, and v[x] be the x-th ancestor of v on the path starting from v (0 ≤ x < L^r_v, v[0] = v, v[L^r_v − 1] = r).", "cPTK evaluates the similarity between two trees T_1(r_1) and T_2(r_2) as follows: Σ_{v_i ∈ V_1} Σ_{x=0}^{L^{r_1}_{v_i} − 1} Λ^x(v_i, v_i') + Σ_{v_j ∈ V_2} Σ_{x=0}^{L^{r_2}_{v_j} − 1} Λ^x(v_j, v_j') (3), where Λ^x(v, v') measures the similarity of subtrees rooted at v[x] and v'[x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Λ^x(v, v') = f(v[x], v'[x]), where v[x] and v'[x] are the x-th ancestor nodes of v and v' on the respective propagation paths; 2) else Λ^x(v, v') = Λ(v, v'), namely PTK.", "Clearly, PTK is a special case of cPTK when x = 0 (see equation 3).", "cPTK evaluates the occurrence of both context-free (without considering ancestors on propagation paths) and context-sensitive cases.", "Rumor Detection via Kernel Learning The advantage of the kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004).", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and its similarity values with all training instances as the feature space.", "Therefore, the kernel matrix of the training set is m × m and that of the test set is n × m, where m and n are the sizes of the training and test sets, respectively.", "For our multi-class task, we perform a one-vs-all classification for each label and then assign the one with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to the interpretability of results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015).", "Experiments and Results Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more accurately, the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016).", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets that are highly retweeted or replied to (though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful).", "We then collected all the propagation threads (i.e.,
retweets and replies) for these source tweets.", "Because the Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from Twrench and crawled the replies through Twitter's web interface.", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article in rumor debunking websites (e.g., snopes.com, Emergent.info, etc.).", "Then we labeled the source tweets by following these rules: 1) source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) for a source tweet in a false rumor event, we flip over the label and assign true to the source tweet if it expresses a denial type of stance; otherwise, the label is assigned as false; 3) the analogous flip-over/no-change rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible.", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: a linear SVM classification model that uses time series to model the variation of a set of hand-crafted features (Ma et al., 2015).", "DTR: a Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015), which searches for enquiry phrases, clusters disputed factual claims, and ranks the clustered results based on statistical features.", "DTC and SVM-RBF: the Twitter information credibility model using a Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with an RBF kernel (Yang et al., 2012), respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: the Random Forest Classifier proposed by Kwon et al. (2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to user, linguistic and structural characteristics.", "GRU: the RNN-based rumor detection model proposed by Ma et al. (2016) with gated recurrent units for representation learning of high-level features from relevant posts over time.", "BOW: a naive baseline we built by representing the text in each tree using bag-of-words and building the rumor classifier with a linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTK- and cPTK- are the settings that only use content while ignoring user properties.", "We implemented DTC and RFC with Weka, SVM models with LibSVM, and GRU with Theano.", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy and F1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best, as it learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., 'what?', 'really?', 'not sure', etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.", "Although DTR uses a set of
regular expressions, we found only 19.59% and 22.21% of the tweets in our datasets contain these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features especially focusing on temporal traits.", "But none of these models can directly incorporate structured propagation patterns for deep similarity comparison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of a structural kernel like ours.", "So, they performed clearly worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich in linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTK-, which only uses text, is already better than GRU, demonstrating the importance of propagation structures.", "PTK, which combines text and user information, yields better results on both datasets, implying that both properties are complementary and that PTK integrating flat and structured information is obviously more effective.", "It is also observed that cPTK outperforms PTK except for the non-rumor class.", "This suggests that context-sensitive modeling based on PTK is effective for different types of rumors, but for non-rumors, it seems that considering the context of the propagation path is not always helpful.", "Figure 5: The example subtree of a rumor captured by the algorithm at an early stage of propagation.", "This might be due to the generally weak signals originating from node properties on the paths during a non-rumor's diffusion, since user distribution patterns in non-rumors do not seem as obvious as in rumors.", "This is not an issue in cPTK-, since user information is not considered at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs.
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our approach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures can be taken as quickly as possible.", "In the early detection task, all the posts after a detection deadline are invisible during test.", "The earlier the deadline, the less propagation information is available.", "Figure 4 shows the performance of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNNs) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance compared to the other models.", "Particularly, cPTK achieves 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, which is much faster than the other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures, especially at the early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such a pattern was not witnessed in non-rumors at the early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on a kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors at finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms state-of-the-art baselines by a large margin for both general and early rumor detection tasks.", "Since the kernel-based approach covers more structural information than feature-based methods, it allows the kernel to further incorporate information from a high-dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring a network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media." ] }
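To make the PTK computation described in the extracted text above concrete, the following is a minimal, illustrative Python sketch of the node similarity f and the recursive subtree similarity Λ, together with the best-match pairing that forms K_P. The Node fields, the default α, and the tree-as-node-list representation are assumptions made for illustration, not the authors' released implementation.

```python
import math
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Node:
    user: List[float]                     # user vector u_v (e.g., #followers, verified flag, ...)
    ngrams: Set[str]                      # uni-/bi-grams of the post content c_v
    time_lag: float                       # t_v: time lag from the source tweet
    children: List["Node"] = field(default_factory=list)

def node_sim(vi: Node, vj: Node, alpha: float = 0.5) -> float:
    """f(v_i, v_j) = e^{-|t_i - t_j|} * (alpha * E + (1 - alpha) * J), as written in the text."""
    t = abs(vi.time_lag - vj.time_lag)
    # E is the Euclidean term between user vectors, J the Jaccard overlap of n-gram sets.
    e = math.sqrt(sum((a - b) ** 2 for a, b in zip(vi.user, vj.user)))
    union = vi.ngrams | vj.ngrams
    j = len(vi.ngrams & vj.ngrams) / len(union) if union else 0.0
    return math.exp(-t) * (alpha * e + (1.0 - alpha) * j)

def subtree_sim(v: Node, w: Node, alpha: float = 0.5) -> float:
    """Lambda(v, v'): soft recursive count of similar subtrees rooted at v and v'."""
    s = node_sim(v, w, alpha)
    if not v.children or not w.children:          # leaf case: Lambda = f
        return s
    for cv, cw in zip(v.children, w.children):    # k = 1 .. min(nc(v), nc(v'))
        s *= 1.0 + subtree_sim(cv, cw, alpha)
    return s

def ptk(nodes1: List[Node], nodes2: List[Node], alpha: float = 0.5) -> float:
    """K_P(T1, T2): pair each node with its most similar counterpart, then sum Lambda."""
    k = 0.0
    for vi in nodes1:
        best = max(nodes2, key=lambda vj: node_sim(vi, vj, alpha))
        k += subtree_sim(vi, best, alpha)
    for vj in nodes2:
        best = max(nodes1, key=lambda vi: node_sim(vj, vi, alpha))
        k += subtree_sim(vj, best, alpha)
    return k
```

This sketch treats a tree simply as the list of its nodes (each node carrying its children) and omits the cPTK extension, which would additionally accumulate Λ over the ancestors v[x] along the propagation path from the root to each subtree.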
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
GEM-SciDuet-train-101#paper-1265#slide-15
Results on Twitter15
NR: Non-Rumor; FR: False Rumor; TR: True Rumor; UR: Unverified Rumor; NR FR TR UR
NR: Non-Rumor; FR: False Rumor; TR: True Rumor; UR: Unverified Rumor; NR FR TR UR
[]
GEM-SciDuet-train-101#paper-1265#slide-16
1265
GEM-SciDuet-train-101#paper-1265#slide-16
Results on Twitter16
NR: Non-Rumor; FR: False Rumor; TR: True Rumor; UR: Unverified Rumor; NR FR TR UR
NR: Non-Rumor; FR: False Rumor; TR: True Rumor; UR: Unverified Rumor; NR FR TR UR
[]
GEM-SciDuet-train-101#paper-1265#slide-17
1265
numerical values representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking if the same production rules and the same children are associated with the nodes in two subtrees being compared, whereas in our context the similarity function should be defined softly since hardly two nodes from different propagation trees are same.", "With the representation of propagation tree, we first define a function f to evaluate the similarity between two nodes v i and v j (we simplify the node representation for instance v i = (u i , c i , t i )) as the following: f (v i , v j ) = e βˆ’t (Ξ±E(u i , u j ) + (1 βˆ’ Ξ±)J (c i , c j )) where t = |t i βˆ’ t j | is the absolute difference between the time lags of v i and v j , E and J are user-based similarity and content-based similarity, respectively, and Ξ± is the trade-off parameter.", "The intuition of using exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor while the same posted far later from initial post may indicate the rumor is still unverified, despite that the two messages are semantically similar.", "The user-based similarity is defined as an Euclidean distance E(u i , u j ) = ||u i βˆ’ u j || 2 , where u i and u j are the user vectors of node v i and v j and || β€’ || 2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation.", "Contentwise, we use Jaccard coefficient to measure the similarity of post content: J (c i , c j ) = |N gram(c i ) ∩ N gram(c j )| |N gram(c i ) βˆͺ N gram(c j )| where c i and c j are the sets of content words in two nodes.", "For n-grams here, we adopt both uni-grams and bi-grams.", "It can capture cue terms e.g., 'false', 'debunk', 'not true', etc.", "commonly occurring in rumors but not in non-rumors.", "Given two propagation trees T 1 = V 1 , E 1 and T 2 = V 2 , E 2 , PTK aims to compute the similarity between T 1 and T 2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v i ∈ V 1 , we obtain v i ∈ V 2 , the most similar node of v i from V 2 : v i = arg max v j ∈V 2 f (v i , v j ) Similarly, for each v j ∈ V 2 , we obtain v j ∈ V 1 : v j = arg max v i ∈V 1 f (v i , v j ) Then, the propagation tree kernel K P (T 1 , T 2 ) is defined as: v i ∈V 1 Ξ›(v i , v i ) + v j ∈V 2 Ξ›(v j , v j ) (2) where Ξ›(v, v ) evaluates the similarity of two subtrees rooted at v and v , which is computed recursively as follows: 1) if v or v are leaf nodes, then Ξ›(v, v ) = f (v, v ); 2) else Ξ›(v, v ) = f (v, v ) min(nc(v),nc(v )) k=1 (1 + Ξ›(ch(v, k), ch(v , k))) Note that unlike traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, Ξ» in tree kernel is not needed as subtree size is not an issue here thanks to node similarity f .", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses on rumors spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout entire propagation periods, and structural and temporal patterns help 
for longitudinal diffusion (Zubiaga et al., 2016; Kwon et al., 2017) .", "Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from source post to the current subtree.", "Intuitively, propagation paths provide further clue for determining the truthfulness of information since they embed the route and context of how the propagation happens.", "Therefore, we propose contextsensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares similar intuition with the context-sensitive tree kernel (Zhou et al., 2007) .", "For a propagation tree node v ∈ T (r), let L r v be the length (i.e., # of nodes) of the propagation path from root r to v, and v[x] be the x-th ancestor of v on the path starting from v (0 ≀ x < L r v , v[0] = v, v[L r v βˆ’ 1] = r) .", "cPTK evaluates the similarity between two trees T 1 (r 1 ) and T 2 (r 2 ) as follows: v i ∈V 1 L r 1 v i βˆ’1 x=0 Ξ› x (v i , v i ) + v j ∈V 2 L r 2 v j βˆ’1 x=0 Ξ› x (v j , v j ) (3) where Ξ› x (v, v ) measures the similarity of sub- trees rooted at v[x] and v [x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Ξ› x (v, v ) = f (v[x], v [x]), where v[x] and v [x] are the x-th ancestor nodes of v and v on the respective propagation path.", "2) else Ξ› x (v, v ) = Ξ›(v, v ), namely PTK.", "Clearly, PTK is a special case of cPTK when x = 0 (see equation 3).", "cPTK evaluates the oc-currence of both context-free (without considering ancestors on propagation paths) and contextsensitive cases.", "Rumor Detection via Kernel Learning The advantage of kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004) .", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and its similarity values with all training instances as feature space.", "Therefore, the kernel matrix of training set is m Γ— m and that of test set is n Γ— m where m and n are the sizes of training and test sets, respectively.", "For our multi-class task, we perform a one-vsall classification for each label and then assign the one with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to interpretability of results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015) .", "Experiments and Results Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more accurately, the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016) .", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets 2 that are highly retweeted or replied.", "We then collected all the propagation threads (i.e., 
retweets and replies) for these source tweets.", "Because Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from 2 Though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful Twrench 3 and crawled the replies through Twitter's web interface.", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article in rumor debunking websites (e.g., snopes.com, Emergent.info, etc).", "Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) For a source tweet in false rumor event, we flip over the label and assign true to the source tweet if it expresses denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/nochange rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible 4 .", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time-series to model the variation of a set of hand-crafted features (Ma et al., 2015) .", "DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015) , which searches for enquiry phrases and clusters disputed factual claims, and ranked the clustered results based on statistical features.", "DTC and SVM-RBF: The Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with RBF kernel (Yang et al., 2012) , respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: The Random Forest Classifier proposed by Kwon et al.", "(2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to the user, linguistic and structure characteristics.", "GRU: The RNN-based rumor detection model proposed by Ma et al.", "(2016) with gated recurrent unit for representation learning of high-level features from relevant posts over time.", "BOW: A naive baseline we worked by representing the text in each tree using bag-of-words and building the rumor classifier with linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTKand cPTKare the setting of only using content while ignoring user properties.", "We implemented DTC and RFC with Weka 5 , SVM models with LibSVM 6 and GRU with Theano 7 .", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy, F 1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best, which learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., \"what?", "\", \"really?", "\", \"not sure\", etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.", "Although DTR uses a set of 
regular expressions, we found only 19.59% and 22.21% tweets in our datasets containing these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features especially focusing on temporal traits.", "But none of the models can directly incorporate structured propagation patterns for deep similarity compar- ison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of the structural kernel like ours.", "So, they performed obviously worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich of linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTKwhich only uses text is already better than GRU, demonstrating the importance of propagation structures.", "PTK that combines text and user yields better results on both datasets, implying that both properties are complementary and PTK integrating flat and structured information is obviously more effective.", "It is also observed that cPTK outperforms PTK except for non-rumor class.", "This suggests the context-sensitive modeling based on PTK is effective for different types of rumors, but for non- The example subtree of a rumor captured by the algorithm at early stage of propagation rumors, it seems that considering context of propagation path is not always helpful.", "This might be due to the generally weak signals originated from node properties on the paths during non-rumor's diffusion since user distribution patterns in nonrumors do not seem as obvious as in rumors.", "This is not an issue in cPTKsince user information is not considered at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs. 
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our ap-proach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures could be taken as quickly as possible.", "In early detection task, all the posts after a detection deadline are invisible during test.", "The earlier the deadline, the less propagation information can be available.", "Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNN) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance than other models.", "Particularly, cPTK achieve 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, that is much faster than other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures especially at early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such pattern was not witnessed in non-rumors in early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms stateof-the-art baselines with large margin for both general and early rumor detection tasks.", "Since kernel-based approach covers more structural information than feature-based methods, it allows kernel to further incorporate information from a high dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
GEM-SciDuet-train-101#paper-1265#slide-17
Results on Early Detection
(a) Twitter15 DATASET (b) Twitter16 DATASET In the first few hours, the accuracy of the kernel-based methods climbs more rapidly and stabilize more quickly cPTK can detect rumors with 72% accuracy for Twitter15 and 69.0% for Twitter16 within 12 hours, which is much earlier than the baselines and the mean official report times
(a) Twitter15 DATASET (b) Twitter16 DATASET In the first few hours, the accuracy of the kernel-based methods climbs more rapidly and stabilize more quickly cPTK can detect rumors with 72% accuracy for Twitter15 and 69.0% for Twitter16 within 12 hours, which is much earlier than the baselines and the mean official report times
[]
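Each row of this dump pairs one slide (its title, its text, and the identical target string) with a full verbatim copy of paper 1265's sentence list and headers, so the paper text is repeated once per slide row. Below is a minimal sketch of how rows of this shape could be regrouped so the paper sentences are stored only once per paper. It assumes each row is available as a Python dict; the key names used (paper_id, paper_content, slide_title, target) are assumptions chosen for illustration and may not match the dataset's actual column names.

```python
from collections import defaultdict

# Toy rows with the assumed shape; in practice these would come from the dump,
# one row per slide, each carrying a verbatim copy of the paper's sentences.
rows = [
    {
        "paper_id": "1265",                                  # assumed key name
        "paper_content": {                                   # assumed key name
            "paper_content_id": [0, 1, 2],
            "paper_content_text": ["Introduction ...", "...", "..."],
        },
        "slide_title": "Results on Early Detection",
        "target": "(a) Twitter15 DATASET (b) Twitter16 DATASET ...",
    },
    # ... more rows, one per slide, each repeating the same paper_content
]

def group_slides_by_paper(rows):
    """Keep a single copy of each paper's sentences and collect its slides."""
    papers = {}
    slides = defaultdict(list)
    for row in rows:
        pid = row["paper_id"]
        # The paper text repeats verbatim in every slide row of the same paper,
        # so storing it once per paper id is enough.
        papers.setdefault(pid, row["paper_content"]["paper_content_text"])
        slides[pid].append((row["slide_title"], row["target"]))
    return papers, slides

papers, slides = group_slides_by_paper(rows)
for pid, slide_list in slides.items():
    print(pid, len(papers[pid]), "sentences,", len(slide_list), "slides")
```

Grouping this way avoids holding the roughly 170 paper sentences in memory once per slide when several slides come from the same paper.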
GEM-SciDuet-train-101#paper-1265#slide-18
1265
Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-of-the-art rumor detection models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas.", "The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory 1 .", "The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line.", "We can see that after the initial post, the tweet was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread.", "A widely accepted definition of rumor is \"unverified and instrumentally relevant information statements in circulation\" (DiFonzo and Bordia, 2007) .", "This unverified information may eventually turn out to be true, or partly or entirely false.", "In today's ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society.", "Therefore, it is crucial to track and debunk such rumors in timely manner.", "Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors.", "However, such endeavor is manual, thus prone to poor coverage and low speed.", "Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.).", "But such an approach was over simplified as they ignored the dynamics of rumor propagation.", "Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013 (Kwon et al., , 2017 rather than the structure of propagation.", "So, can the propagation structure make any difference for differentiating rumors from nonrumors?", "Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014) .", "However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017) .", "Intuitively, for \"successful\" rumors being propagated as widely as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a 
lot of audiences joining in promoting the propagation.", "We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share.", "Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors.", "Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased.", "Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color).", "The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users' stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets.", "Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did.", "In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general.", "Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2 : Fragments of the propagation for two source tweets.", "Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps.", "Ma et al., 2015 Ma et al., , 2016 cannot be applied easily on such complex, dynamic structures.", "To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user's interactions to one another triggered by the source tweet.", "Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees.", "Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts.", "The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors.", "We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission.", "Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin.", "Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not.", "Since a rumor often begins as unverified and later turns 
out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016) , here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem.", "Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991) .", "Castillo et al.", "(2011) studied information credibility on Twitter using a wide range of hand-crafted features.", "Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015) .", "Zhao et al.", "(2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor.", "All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data.", "Some studies focus on finding temporal patterns for understanding rumor diffusion.", "Kown et al.", "(2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume.", "Ma et al.", "(2015) extended the model using time series to capture the variation of features over time.", "Friggeri et al.", "(2014) and Hannak et al.", "(2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites.", "More recently, Ma et al.", "(2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times.", "Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel.", "Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies.", "Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task.", "Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011) .", "Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001) , question-answering (Moschitti, 2006) , semantic analysis (Moschitti, 2004) , relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010) .", "These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors.", "Our proposed method is a substantial extension of tree kernel for modeling such structures.", "Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users.", "Once a user has posted a tweet, all his followers will receive the tweet.", "Furthermore, Twitter allows a user to retweet or comment another user's post, so that the information could reach beyond the network of the original creator.", "We model the propagation of each source tweet as a tree structure T (r) = V, E , where r is the source tweet as well as the root of the tree, V refers to a set of nodes each 
representing a responsive post (i.e., retweet or reply) of a user at a certain time to the source tweet r which initiates the circulation, and E is a set of directed edges corresponding to the response relation among the nodes in V .", "If there exists a directed edge from v i to v j , it means v j is a direct response to v i .", "More specifically, each node v ∈ V is repre- sented as a tuple v = (u v , c v , t v ) , which provides the following information: u v is the creator of the post, c v represents the text content of the post, and t v is the time lag between the source tweet r and v. In our case, u v contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., c v is a vector of binary features based on uni-grams and/or bi-grams representing the post's content.", "Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using kernel method called Propagation Tree Kernel (PTK).", "Our task is, given a propagation tree T (r) of a source tweet r, to predict the label of r. Background of Tree Kernel Before presenting our proposed algorithm, we briefly present the traditional tree kernel, which our PTK model is based on.", "Tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees.", "Given a syntactic parse tree, each node with its children is associated with a grammar production rule.", "Figure 3 illustrates the syntactic parse tree of \"cut a tree\" and its subtrees.", "A subtree is defined as any subgraph which has more than one nodes, with the restriction that entire (not partial) rule productions must be included.", "For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP β†’ D N (Collins and Duffy, 2001) .", "Following Collins and Duffy (2001) , given two parse trees T 1 and T 2 , the kernel function K(T 1 , T 2 ) is defined as: v i ∈V 1 v j ∈V 2 βˆ†(v i , v j ) (1) where V 1 and V 2 are the sets of all nodes respectively in T 1 and T 2 , and each node is associated with a production rule, and βˆ†(v i , v j ) evaluates the common subtrees rooted at v i and v j .", "βˆ†(v i , v j ) can be computed using the following recursive procedure (Collins and Duffy, 2001) : 1) if the production rules at v i and v j are different, then βˆ†(v i , v j ) = 0; 2) else if the production rules at v i and v j are same, and v i and v j have only leaf children (i.e., they are pre-terminal symbols), then βˆ†(v i , v j ) = Ξ»; 3) else βˆ†(v i , v j ) = Ξ» min(nc(v i ),nc(v j )) k=1 (1 + βˆ†(ch(v i , k), ch(v j , k))).", "where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and Ξ» (0 < Ξ» ≀ 1) is a decay factor.", "Ξ» = 1 yields the number of common subtrees; Ξ» < 1 down weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size.", "Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction of different types of rumors and non-rumors based on structural, linguistic and temporal properties.", "However, existing tree kernels cannot be readily applied on propagation trees because 1) unlike parse tree where the node is represented by enumerable nominal value (e.g., part-of-speech tag), the propagation tree node is given as a vector of continuous 
numerical values representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking if the same production rules and the same children are associated with the nodes in two subtrees being compared, whereas in our context the similarity function should be defined softly since hardly two nodes from different propagation trees are same.", "With the representation of propagation tree, we first define a function f to evaluate the similarity between two nodes v i and v j (we simplify the node representation for instance v i = (u i , c i , t i )) as the following: f (v i , v j ) = e βˆ’t (Ξ±E(u i , u j ) + (1 βˆ’ Ξ±)J (c i , c j )) where t = |t i βˆ’ t j | is the absolute difference between the time lags of v i and v j , E and J are user-based similarity and content-based similarity, respectively, and Ξ± is the trade-off parameter.", "The intuition of using exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor while the same posted far later from initial post may indicate the rumor is still unverified, despite that the two messages are semantically similar.", "The user-based similarity is defined as an Euclidean distance E(u i , u j ) = ||u i βˆ’ u j || 2 , where u i and u j are the user vectors of node v i and v j and || β€’ || 2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation.", "Contentwise, we use Jaccard coefficient to measure the similarity of post content: J (c i , c j ) = |N gram(c i ) ∩ N gram(c j )| |N gram(c i ) βˆͺ N gram(c j )| where c i and c j are the sets of content words in two nodes.", "For n-grams here, we adopt both uni-grams and bi-grams.", "It can capture cue terms e.g., 'false', 'debunk', 'not true', etc.", "commonly occurring in rumors but not in non-rumors.", "Given two propagation trees T 1 = V 1 , E 1 and T 2 = V 2 , E 2 , PTK aims to compute the similarity between T 1 and T 2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v i ∈ V 1 , we obtain v i ∈ V 2 , the most similar node of v i from V 2 : v i = arg max v j ∈V 2 f (v i , v j ) Similarly, for each v j ∈ V 2 , we obtain v j ∈ V 1 : v j = arg max v i ∈V 1 f (v i , v j ) Then, the propagation tree kernel K P (T 1 , T 2 ) is defined as: v i ∈V 1 Ξ›(v i , v i ) + v j ∈V 2 Ξ›(v j , v j ) (2) where Ξ›(v, v ) evaluates the similarity of two subtrees rooted at v and v , which is computed recursively as follows: 1) if v or v are leaf nodes, then Ξ›(v, v ) = f (v, v ); 2) else Ξ›(v, v ) = f (v, v ) min(nc(v),nc(v )) k=1 (1 + Ξ›(ch(v, k), ch(v , k))) Note that unlike traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, Ξ» in tree kernel is not needed as subtree size is not an issue here thanks to node similarity f .", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses on rumors spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout entire propagation periods, and structural and temporal patterns help 
for longitudinal diffusion (Zubiaga et al., 2016; Kwon et al., 2017) .", "Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from source post to the current subtree.", "Intuitively, propagation paths provide further clue for determining the truthfulness of information since they embed the route and context of how the propagation happens.", "Therefore, we propose contextsensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares similar intuition with the context-sensitive tree kernel (Zhou et al., 2007) .", "For a propagation tree node v ∈ T (r), let L r v be the length (i.e., # of nodes) of the propagation path from root r to v, and v[x] be the x-th ancestor of v on the path starting from v (0 ≀ x < L r v , v[0] = v, v[L r v βˆ’ 1] = r) .", "cPTK evaluates the similarity between two trees T 1 (r 1 ) and T 2 (r 2 ) as follows: v i ∈V 1 L r 1 v i βˆ’1 x=0 Ξ› x (v i , v i ) + v j ∈V 2 L r 2 v j βˆ’1 x=0 Ξ› x (v j , v j ) (3) where Ξ› x (v, v ) measures the similarity of sub- trees rooted at v[x] and v [x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Ξ› x (v, v ) = f (v[x], v [x]), where v[x] and v [x] are the x-th ancestor nodes of v and v on the respective propagation path.", "2) else Ξ› x (v, v ) = Ξ›(v, v ), namely PTK.", "Clearly, PTK is a special case of cPTK when x = 0 (see equation 3).", "cPTK evaluates the oc-currence of both context-free (without considering ancestors on propagation paths) and contextsensitive cases.", "Rumor Detection via Kernel Learning The advantage of kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004) .", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and its similarity values with all training instances as feature space.", "Therefore, the kernel matrix of training set is m Γ— m and that of test set is n Γ— m where m and n are the sizes of training and test sets, respectively.", "For our multi-class task, we perform a one-vsall classification for each label and then assign the one with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to interpretability of results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015) .", "Experiments and Results Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more accurately, the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016) .", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets 2 that are highly retweeted or replied.", "We then collected all the propagation threads (i.e., 
retweets and replies) for these source tweets.", "Because Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from 2 Though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful Twrench 3 and crawled the replies through Twitter's web interface.", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article in rumor debunking websites (e.g., snopes.com, Emergent.info, etc).", "Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) For a source tweet in false rumor event, we flip over the label and assign true to the source tweet if it expresses denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/nochange rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible 4 .", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time-series to model the variation of a set of hand-crafted features (Ma et al., 2015) .", "DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015) , which searches for enquiry phrases and clusters disputed factual claims, and ranked the clustered results based on statistical features.", "DTC and SVM-RBF: The Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with RBF kernel (Yang et al., 2012) , respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: The Random Forest Classifier proposed by Kwon et al.", "(2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to the user, linguistic and structure characteristics.", "GRU: The RNN-based rumor detection model proposed by Ma et al.", "(2016) with gated recurrent unit for representation learning of high-level features from relevant posts over time.", "BOW: A naive baseline we worked by representing the text in each tree using bag-of-words and building the rumor classifier with linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTKand cPTKare the setting of only using content while ignoring user properties.", "We implemented DTC and RFC with Weka 5 , SVM models with LibSVM 6 and GRU with Theano 7 .", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy, F 1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best, which learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., \"what?", "\", \"really?", "\", \"not sure\", etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.", "Although DTR uses a set of 
regular expressions, we found only 19.59% and 22.21% tweets in our datasets containing these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features especially focusing on temporal traits.", "But none of the models can directly incorporate structured propagation patterns for deep similarity compar- ison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of the structural kernel like ours.", "So, they performed obviously worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich of linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTKwhich only uses text is already better than GRU, demonstrating the importance of propagation structures.", "PTK that combines text and user yields better results on both datasets, implying that both properties are complementary and PTK integrating flat and structured information is obviously more effective.", "It is also observed that cPTK outperforms PTK except for non-rumor class.", "This suggests the context-sensitive modeling based on PTK is effective for different types of rumors, but for non- The example subtree of a rumor captured by the algorithm at early stage of propagation rumors, it seems that considering context of propagation path is not always helpful.", "This might be due to the generally weak signals originated from node properties on the paths during non-rumor's diffusion since user distribution patterns in nonrumors do not seem as obvious as in rumors.", "This is not an issue in cPTKsince user information is not considered at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs. 
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our ap-proach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures could be taken as quickly as possible.", "In early detection task, all the posts after a detection deadline are invisible during test.", "The earlier the deadline, the less propagation information can be available.", "Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNN) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance than other models.", "Particularly, cPTK achieve 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, that is much faster than other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures especially at early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such pattern was not witnessed in non-rumors in early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms stateof-the-art baselines with large margin for both general and early rumor detection tasks.", "Since kernel-based approach covers more structural information than feature-based methods, it allows kernel to further incorporate information from a high dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
GEM-SciDuet-train-101#paper-1265#slide-18
Early Detection Example
Example subtree of a rumor captured by the algorithm at early stage of propagation Influential users boost its propagation, unpopular-to-popular information flow, Textual signals (underlined)
Example subtree of a rumor captured by the algorithm at early stage of propagation Influential users boost its propagation, unpopular-to-popular information flow, Textual signals (underlined)
[]
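For readers following the method described inside the paper_content field of these rows, the node-level similarity used by the Propagation Tree Kernel is quoted there as f(v_i, v_j) = e^{-t}(alpha * E(u_i, u_j) + (1 - alpha) * J(c_i, c_j)), with t the absolute difference of the two nodes' time lags, E a Euclidean distance over user vectors, and J a Jaccard coefficient over n-gram sets. The sketch below is a rough illustrative re-implementation of that single formula, not the authors' released code; the alpha value and the toy inputs are assumptions.

```python
import math

def jaccard(ngrams_a, ngrams_b):
    """Jaccard coefficient over two n-gram sets (uni-grams and/or bi-grams)."""
    a, b = set(ngrams_a), set(ngrams_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def euclidean(u_a, u_b):
    """Euclidean distance between two user feature vectors of equal length."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(u_a, u_b)))

def node_similarity(node_a, node_b, alpha=0.5):
    """f(v_i, v_j) = exp(-|t_i - t_j|) * (alpha * E + (1 - alpha) * J),
    following the formula quoted in the paper_content above. Note the quoted
    definition uses a Euclidean *distance* for the user term; it is reproduced
    here as stated. alpha=0.5 is an arbitrary assumption, not a reported value."""
    u_a, c_a, t_a = node_a
    u_b, c_b, t_b = node_b
    dt = abs(t_a - t_b)
    return math.exp(-dt) * (alpha * euclidean(u_a, u_b) + (1 - alpha) * jaccard(c_a, c_b))

# Toy usage with made-up nodes: (user vector, n-gram set, time lag in hours)
v1 = ([100.0, 1.0], {"not", "true", "not true"}, 0.5)
v2 = ([80.0, 0.0], {"really", "not", "not sure"}, 1.0)
print(round(node_similarity(v1, v2), 4))
```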
GEM-SciDuet-train-101#paper-1265#slide-19
1265
Detect Rumors in Microblog Posts Using Propagation Structure via Kernel Learning
How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-of-the-art rumor detection models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170 ], "paper_content_text": [ "Introduction On November 9th, 2016, Eric Tucker, a grassroots user who had just about 40 followers on Twitter, tweeted his unverified observations about paid protesters being bused to attend anti-Trump demonstration in Austin, Texas.", "The tweet, which was proved false later, was shared over 16 thousand times on Twitter and 350 thousand times on Facebook within a couple of days, fueling a nation-wide conspiracy theory 1 .", "The diffusion of the story is illustrated as Figure 1 which gives the key spreading points of the story along the time line.", "We can see that after the initial post, the tweet was shared or promoted by some influential online communities and users (including Trump himself), resulting in its wide spread.", "A widely accepted definition of rumor is \"unverified and instrumentally relevant information statements in circulation\" (DiFonzo and Bordia, 2007) .", "This unverified information may eventually turn out to be true, or partly or entirely false.", "In today's ever-connected world, rumors can arise and spread at lightening speed thanks to social media platforms, which could not only be wrong, but be misleading and dangerous to the public society.", "Therefore, it is crucial to track and debunk such rumors in timely manner.", "Journalists and fact-checking websites such as snopes.com have made efforts to track and detect rumors.", "However, such endeavor is manual, thus prone to poor coverage and low speed.", "Feature-based methods (Castillo et al., 2011; Yang et al., 2012; Ma et al., 2015) achieved certain success by employing large feature sets crafted from message contents, user profiles and holistic statistics of diffusion patterns (e.g., number of retweets, propagation time, etc.).", "But such an approach was over simplified as they ignored the dynamics of rumor propagation.", "Existing studies considering propagation characteristics mainly focused on the temporal features (Kwon et al., 2013 (Kwon et al., , 2017 rather than the structure of propagation.", "So, can the propagation structure make any difference for differentiating rumors from nonrumors?", "Recent studies showed that rumor spreaders are persons who want to get attention and popularity (Sunstein, 2014) .", "However, popular users who get more attention on Twitter (e.g., with more followers) are actually less likely to spread rumor in a sense that the high audience size might hinder a user from participating in propagating unverified information (Kwon et al., 2017) .", "Intuitively, for \"successful\" rumors being propagated as widely as popular real news, initial spreaders (typically lack of popularity) must attract certain amount of broadcasting power, e.g., attention of influential users or communities that have a 
lot of audiences joining in promoting the propagation.", "We refer to this as a constrained mode propagation, relative to the open mode propagation of normal messages that everyone is open to share.", "Such different modes of propagation may imply some distinct propagation structures between rumors and nonrumors and even among different types of rumors.", "Due to the complex nature of information diffusion, explicitly defining discriminant features based on propagation structure is difficult and biased.", "Figure 2 exemplifies the propagation structures of two Twitter posts, a rumor and a nonrumor, initiated by two users shown as the root nodes (in green color).", "The information flows here illustrate that the rumorous tweet is first posted by a low-impact user, then some popular users joining in who boost the spreading, but the non-rumorous tweet is initially posted by a popular user and directly spread by many general users; contentbased signal like various users' stance (Zhao et al., 2015) and edge-based signal such as relative influence (Kwon et al., 2017) can also suggest the different nature of source tweets.", "Many of such implicit distinctions throughout message propagation are hard to hand craft specifically using flat summary of statistics as previous work did.", "In addition, unlike representation learning for plain text, learning for representation of structures such as networks is not well studied in general.", "Therefore, traditional and latest text-based models (Castillo (a) A rumor (b) A non-rumor Figure 2 : Fragments of the propagation for two source tweets.", "Node size: denotes the popularity of the user who tweet the post (represented by # of followers); Red, black, blue node: content-wise the user express doubt/denial, support, neutrality in the tweet, respectively; Solid (dotted) edge: information flow from a more (less) popular user to a less (more) popular user; Dashed concentric circles: time stamps.", "Ma et al., 2015 Ma et al., , 2016 cannot be applied easily on such complex, dynamic structures.", "To capture high-order propagation patterns for rumor detection, we firstly represent the propagation of each source tweet with a propagation tree which is formed by harvesting user's interactions to one another triggered by the source tweet.", "Then, we propose a kernel-based data-driven method called Propagation Tree Kernel (PTK) to generate relevant features (i.e., subtrees) automatically for estimating the similarity between two propagation trees.", "Unlike traditional tree kernel (Moschitti, 2006; Zhang et al., 2008) for modeling syntactic structure based on parse tree, our propagation tree consists of nodes corresponding to microblog posts, each represented as a continuous vector, and edges representing the direction of propagation and providing the context to individual posts.", "The basic idea is to find and capture the salient substructures in the propagation trees indicative of rumors.", "We also extend PTK into a context-enriched PTK (cPTK) to enhance the model by considering different propagation paths from source tweet to the roots of subtrees, which capture the context of transmission.", "Extensive experiments on two real-world Twitter datasets show that the proposed methods outperform state-of-the-art rumor detection models with large margin.", "Moreover, most existing approaches regard rumor detection as a binary classification problem, which predicts a candidate hypothesis as rumor or not.", "Since a rumor often begins as unverified and later turns 
out to be confirmed as true or false, or remains unverified (Zubiaga et al., 2016) , here we consider a set of more practical, finer-grained classes: false rumor, true rumor, unverified rumor, and non-rumor, which becomes an even more challenging problem.", "Related Work Tracking misinformation or debunking rumors has been a hot research topic in multiple disciplines (DiFonzo and Bordia, 2007; Morris et al., 2012; Rosnow, 1991) .", "Castillo et al.", "(2011) studied information credibility on Twitter using a wide range of hand-crafted features.", "Following that, various features corresponding to message contents, user profiles and statistics of propagation patterns were proposed in many studies (Yang et al., 2012; Wu et al., 2015; Sun et al., 2013; Liu et al., 2015) .", "Zhao et al.", "(2015) focused on early rumor detection by using regular expressions for finding questing and denying tweets as the key for debunking rumor.", "All such approaches are over simplistic because they ignore the dynamic propagation patterns given the rich structures of social media data.", "Some studies focus on finding temporal patterns for understanding rumor diffusion.", "Kown et al.", "(2013; 2017) introduced a time-series fitting model based on the temporal properties of tweet volume.", "Ma et al.", "(2015) extended the model using time series to capture the variation of features over time.", "Friggeri et al.", "(2014) and Hannak et al.", "(2014) studied the structure of misinformation cascades by analyzing comments linking to rumor debunking websites.", "More recently, Ma et al.", "(2016) used recurrent neural networks to learn the representations of rumor signals from tweet text at different times.", "Our work will consider temporal, structural and linguistic signals in a unified framework based on propagation tree kernel.", "Most previous work formulated the task as classification at event level where an event is comprised of a number of source tweets, each being associated with a group of retweets and replies.", "Here we focus on classifying a given source tweet regarding a claim which is a finer-grained task.", "Similar setting was also considered in (Wu et al., 2015; Qazvinian et al., 2011) .", "Kernel methods are designed to evaluate the similarity between two objects, and tree kernel specifically addresses structured data which has been successfully applied for modeling syntactic information in many natural language tasks such as syntactic parsing (Collins and Duffy, 2001) , question-answering (Moschitti, 2006) , semantic analysis (Moschitti, 2004) , relation extraction (Zhang et al., 2008) and machine translation (Sun et al., 2010) .", "These kernels are not suitable for modeling the social media propagation structures because the nodes are not given as discrete values like part-of-speech tags, but are represented as high dimensional real-valued vectors.", "Our proposed method is a substantial extension of tree kernel for modeling such structures.", "Representation of Tweets Propagation On microblogging platforms, the follower/friend relationship embeds shared interests among the users.", "Once a user has posted a tweet, all his followers will receive the tweet.", "Furthermore, Twitter allows a user to retweet or comment another user's post, so that the information could reach beyond the network of the original creator.", "We model the propagation of each source tweet as a tree structure T (r) = V, E , where r is the source tweet as well as the root of the tree, V refers to a set of nodes each 
representing a responsive post (i.e., retweet or reply) of a user at a certain time to the source tweet r which initiates the circulation, and E is a set of directed edges corresponding to the response relation among the nodes in V .", "If there exists a directed edge from v i to v j , it means v j is a direct response to v i .", "More specifically, each node v ∈ V is repre- sented as a tuple v = (u v , c v , t v ) , which provides the following information: u v is the creator of the post, c v represents the text content of the post, and t v is the time lag between the source tweet r and v. In our case, u v contains attributes of the user such as # of followers/friends, verification status, # of history posts, etc., c v is a vector of binary features based on uni-grams and/or bi-grams representing the post's content.", "Propagation Tree Kernel Modeling In this section, we describe our rumor detection model based on propagation trees using kernel method called Propagation Tree Kernel (PTK).", "Our task is, given a propagation tree T (r) of a source tweet r, to predict the label of r. Background of Tree Kernel Before presenting our proposed algorithm, we briefly present the traditional tree kernel, which our PTK model is based on.", "Tree kernel was designed to compute the syntactic and semantic similarity between two natural language sentences by implicitly counting the number of common subtrees between their corresponding parse trees.", "Given a syntactic parse tree, each node with its children is associated with a grammar production rule.", "Figure 3 illustrates the syntactic parse tree of \"cut a tree\" and its subtrees.", "A subtree is defined as any subgraph which has more than one nodes, with the restriction that entire (not partial) rule productions must be included.", "For example, the fragment [NP [D a]] is excluded because it contains only part of the production NP β†’ D N (Collins and Duffy, 2001) .", "Following Collins and Duffy (2001) , given two parse trees T 1 and T 2 , the kernel function K(T 1 , T 2 ) is defined as: v i ∈V 1 v j ∈V 2 βˆ†(v i , v j ) (1) where V 1 and V 2 are the sets of all nodes respectively in T 1 and T 2 , and each node is associated with a production rule, and βˆ†(v i , v j ) evaluates the common subtrees rooted at v i and v j .", "βˆ†(v i , v j ) can be computed using the following recursive procedure (Collins and Duffy, 2001) : 1) if the production rules at v i and v j are different, then βˆ†(v i , v j ) = 0; 2) else if the production rules at v i and v j are same, and v i and v j have only leaf children (i.e., they are pre-terminal symbols), then βˆ†(v i , v j ) = Ξ»; 3) else βˆ†(v i , v j ) = Ξ» min(nc(v i ),nc(v j )) k=1 (1 + βˆ†(ch(v i , k), ch(v j , k))).", "where nc(v) is the number of children of node v, ch(v, k) is the k-th child of node v, and Ξ» (0 < Ξ» ≀ 1) is a decay factor.", "Ξ» = 1 yields the number of common subtrees; Ξ» < 1 down weighs the contribution of larger subtrees to make the kernel value less variable with respect to subtree size.", "Our PTK Model To classify propagation trees, we can calculate the similarity between the trees, which is supposed to reflect the distinction of different types of rumors and non-rumors based on structural, linguistic and temporal properties.", "However, existing tree kernels cannot be readily applied on propagation trees because 1) unlike parse tree where the node is represented by enumerable nominal value (e.g., part-of-speech tag), the propagation tree node is given as a vector of continuous 
numerical values representing the basic properties of the node; 2) the similarity of two parse trees is based on the count of common subtrees, for which the commonality of subtrees is evaluated by checking if the same production rules and the same children are associated with the nodes in two subtrees being compared, whereas in our context the similarity function should be defined softly since hardly two nodes from different propagation trees are same.", "With the representation of propagation tree, we first define a function f to evaluate the similarity between two nodes v i and v j (we simplify the node representation for instance v i = (u i , c i , t i )) as the following: f (v i , v j ) = e βˆ’t (Ξ±E(u i , u j ) + (1 βˆ’ Ξ±)J (c i , c j )) where t = |t i βˆ’ t j | is the absolute difference between the time lags of v i and v j , E and J are user-based similarity and content-based similarity, respectively, and Ξ± is the trade-off parameter.", "The intuition of using exponential function of t to scale down the similarity is to capture the discriminant signals or patterns at the different stages of propagation.", "For example, a questioning message posted very early may signal a false rumor while the same posted far later from initial post may indicate the rumor is still unverified, despite that the two messages are semantically similar.", "The user-based similarity is defined as an Euclidean distance E(u i , u j ) = ||u i βˆ’ u j || 2 , where u i and u j are the user vectors of node v i and v j and || β€’ || 2 is the 2-norm of a vector.", "Here E is used to capture the characteristics of users participating in spreading rumors as discriminant signals, throughout the entire stage of propagation.", "Contentwise, we use Jaccard coefficient to measure the similarity of post content: J (c i , c j ) = |N gram(c i ) ∩ N gram(c j )| |N gram(c i ) βˆͺ N gram(c j )| where c i and c j are the sets of content words in two nodes.", "For n-grams here, we adopt both uni-grams and bi-grams.", "It can capture cue terms e.g., 'false', 'debunk', 'not true', etc.", "commonly occurring in rumors but not in non-rumors.", "Given two propagation trees T 1 = V 1 , E 1 and T 2 = V 2 , E 2 , PTK aims to compute the similarity between T 1 and T 2 iteratively based on enumerating all pairs of most similar subtrees.", "First, for each node v i ∈ V 1 , we obtain v i ∈ V 2 , the most similar node of v i from V 2 : v i = arg max v j ∈V 2 f (v i , v j ) Similarly, for each v j ∈ V 2 , we obtain v j ∈ V 1 : v j = arg max v i ∈V 1 f (v i , v j ) Then, the propagation tree kernel K P (T 1 , T 2 ) is defined as: v i ∈V 1 Ξ›(v i , v i ) + v j ∈V 2 Ξ›(v j , v j ) (2) where Ξ›(v, v ) evaluates the similarity of two subtrees rooted at v and v , which is computed recursively as follows: 1) if v or v are leaf nodes, then Ξ›(v, v ) = f (v, v ); 2) else Ξ›(v, v ) = f (v, v ) min(nc(v),nc(v )) k=1 (1 + Ξ›(ch(v, k), ch(v , k))) Note that unlike traditional tree kernel, in PTK the node similarity f ∈ [0, 1] is used for softly counting similar subtrees instead of common subtrees.", "Also, Ξ» in tree kernel is not needed as subtree size is not an issue here thanks to node similarity f .", "PTK aims to capture discriminant patterns from propagation trees inclusive of user, content and temporal traits, which is inspired by prior analyses on rumors spreading, e.g., user information can be a strong clue in the initial broadcast, content features are important throughout entire propagation periods, and structural and temporal patterns help 
for longitudinal diffusion (Zubiaga et al., 2016; Kwon et al., 2017) .", "Context-Sensitive Extension of PTK One defect of PTK is that it ignores the clues outside the subtrees, e.g., how the information propagates from source post to the current subtree.", "Intuitively, propagation paths provide further clue for determining the truthfulness of information since they embed the route and context of how the propagation happens.", "Therefore, we propose contextsensitive PTK (cPTK) by considering the propagation paths from the root of the tree to the roots of subtrees, which shares similar intuition with the context-sensitive tree kernel (Zhou et al., 2007) .", "For a propagation tree node v ∈ T (r), let L r v be the length (i.e., # of nodes) of the propagation path from root r to v, and v[x] be the x-th ancestor of v on the path starting from v (0 ≀ x < L r v , v[0] = v, v[L r v βˆ’ 1] = r) .", "cPTK evaluates the similarity between two trees T 1 (r 1 ) and T 2 (r 2 ) as follows: v i ∈V 1 L r 1 v i βˆ’1 x=0 Ξ› x (v i , v i ) + v j ∈V 2 L r 2 v j βˆ’1 x=0 Ξ› x (v j , v j ) (3) where Ξ› x (v, v ) measures the similarity of sub- trees rooted at v[x] and v [x] for context-sensitive evaluation, which is computed as follows: 1) if x > 0, Ξ› x (v, v ) = f (v[x], v [x]), where v[x] and v [x] are the x-th ancestor nodes of v and v on the respective propagation path.", "2) else Ξ› x (v, v ) = Ξ›(v, v ), namely PTK.", "Clearly, PTK is a special case of cPTK when x = 0 (see equation 3).", "cPTK evaluates the oc-currence of both context-free (without considering ancestors on propagation paths) and contextsensitive cases.", "Rumor Detection via Kernel Learning The advantage of kernel-based method is that we can avoid painstakingly engineering the features.", "This is possible because the kernel function can explore an implicit feature space when calculating the similarity between two objects (Culotta and Sorensen, 2004) .", "We incorporate the proposed tree kernel functions, i.e., PTK (equation 2) or cPTK (equation 3), into a supervised learning framework, for which we utilize a kernel-based SVM classifier.", "We treat each tree as an instance, and its similarity values with all training instances as feature space.", "Therefore, the kernel matrix of training set is m Γ— m and that of test set is n Γ— m where m and n are the sizes of training and test sets, respectively.", "For our multi-class task, we perform a one-vsall classification for each label and then assign the one with the highest likelihood among the four, i.e., non-rumor, false rumor, true rumor or unverified rumor.", "We choose this method due to interpretability of results, similar to recent work on occupational class classification (Preotiuc-Pietro et al., 2015; Lukasik et al., 2015) .", "Experiments and Results Data Sets To our knowledge, there is no public large dataset available for classifying propagation trees, where we need a good number of source tweets, more accurately, the tree roots together with the corresponding propagation structure, to be appropriately annotated with ground truth.", "We constructed our datasets based on a couple of reference datasets, namely Twitter15 (Liu et al., 2015) and Twitter16 (Ma et al., 2016) .", "The original datasets were released and used for binary classification of rumor and non-rumor with respect to given events that contain their relevant tweets.", "First, we extracted the popular source tweets 2 that are highly retweeted or replied.", "We then collected all the propagation threads (i.e., 
retweets and replies) for these source tweets.", "Because Twitter API cannot retrieve the retweets or replies, we gathered the retweet users for a given tweet from 2 Though unpopular tweets could be false, we ignore them as they do not draw much attention and are hardly impactful Twrench 3 and crawled the replies through Twitter's web interface.", "Finally, we annotated the source tweets by referring to the labels of the events they are from.", "We first turned the label of each event in Twitter15 and Twitter16 from binary to quaternary according to the veracity tag of the article in rumor debunking websites (e.g., snopes.com, Emergent.info, etc).", "Then we labeled the source tweets by following these rules: 1) Source tweets from unverified rumor events or non-rumor events are labeled the same as the corresponding event's label; 2) For a source tweet in false rumor event, we flip over the label and assign true to the source tweet if it expresses denial type of stance; otherwise, the label is assigned as false; 3) The analogous flip-over/nochange rule applies to the source tweets from true rumor events.", "We make the datasets produced publicly accessible 4 .", "Table 1 gives statistics on the resulting datasets.", "Experimental Setup We compare our kernel-based method against the following baselines: SVM-TS: A linear SVM classification model that uses time-series to model the variation of a set of hand-crafted features (Ma et al., 2015) .", "DTR: A Decision-Tree-based Ranking method to identify trending rumors (Zhao et al., 2015) , which searches for enquiry phrases and clusters disputed factual claims, and ranked the clustered results based on statistical features.", "DTC and SVM-RBF: The Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011) and the SVM-based model with RBF kernel (Yang et al., 2012) , respectively, both using hand-crafted features based on the overall statistics of the posts.", "RFC: The Random Forest Classifier proposed by Kwon et al.", "(2017) using three parameters to fit the temporal properties and an extensive set of hand-crafted features related to the user, linguistic and structure characteristics.", "GRU: The RNN-based rumor detection model proposed by Ma et al.", "(2016) with gated recurrent unit for representation learning of high-level features from relevant posts over time.", "BOW: A naive baseline we worked by representing the text in each tree using bag-of-words and building the rumor classifier with linear SVM.", "Our models: PTK and cPTK are our full PTK and cPTK models, respectively; PTKand cPTKare the setting of only using content while ignoring user properties.", "We implemented DTC and RFC with Weka 5 , SVM models with LibSVM 6 and GRU with Theano 7 .", "We held out 10% of the trees in each dataset for model tuning, and for the rest of the trees, we performed 3-fold cross-validation.", "We used accuracy, F 1 measure as evaluation metrics.", "Table 2 shows that our proposed methods outperform all the baselines on both datasets.", "Experimental Results Among all baselines, GRU performs the best, which learns the low-dimensional representation of responsive tweets by capturing the textual and temporal information.", "This indicates the effectiveness of complex signals indicative of rumors beyond cue words or phrases (e.g., \"what?", "\", \"really?", "\", \"not sure\", etc.).", "This also justifies the good performance of BOW even though it only uses uni-grams for representation.", "Although DTR uses a set of 
regular expressions, we found only 19.59% and 22.21% tweets in our datasets containing these expressions.", "That is why the results of DTR are not satisfactory.", "SVM-TS and RFC are comparable because both of them utilize an extensive set of features especially focusing on temporal traits.", "But none of the models can directly incorporate structured propagation patterns for deep similarity compar- ison between propagation trees.", "SVM-RBF, although using a non-linear kernel, is based on traditional hand-crafted features instead of the structural kernel like ours.", "So, they performed obviously worse than our approach.", "Representation learning methods like GRU cannot easily utilize complex structural information for learning important features from our networked data.", "In contrast, our models can capture complex propagation patterns from structured data rich of linguistic, user and temporal signals.", "Therefore, the superiority of our models is clear: PTKwhich only uses text is already better than GRU, demonstrating the importance of propagation structures.", "PTK that combines text and user yields better results on both datasets, implying that both properties are complementary and PTK integrating flat and structured information is obviously more effective.", "It is also observed that cPTK outperforms PTK except for non-rumor class.", "This suggests the context-sensitive modeling based on PTK is effective for different types of rumors, but for non- The example subtree of a rumor captured by the algorithm at early stage of propagation rumors, it seems that considering context of propagation path is not always helpful.", "This might be due to the generally weak signals originated from node properties on the paths during non-rumor's diffusion since user distribution patterns in nonrumors do not seem as obvious as in rumors.", "This is not an issue in cPTKsince user information is not considered at all.", "Over all classes, cPTK achieves the highest accuracies on both datasets.", "Furthermore, we observe that all the baseline methods perform much better on non-rumors than on rumors.", "This is because the features of existing methods were defined for a binary (rumor vs. 
non-rumor) classification problem.", "So, they do not perform well for finer-grained classes.", "Our ap-proach can differentiate various classes much better by deep, detailed comparison of different patterns based on propagation structure.", "Early Detection Performance Detecting rumors at an early stage of propagation is very important so that preventive measures could be taken as quickly as possible.", "In early detection task, all the posts after a detection deadline are invisible during test.", "The earlier the deadline, the less propagation information can be available.", "Figure 4 shows the performances of our PTK and cPTK models versus RFC (the best system based on feature engineering), GRU (the best system based on RNN) and DTR (an early-detection-specific algorithm) against various deadlines.", "In the first few hours, our approach demonstrates superior early detection performance than other models.", "Particularly, cPTK achieve 75% accuracy on Twitter15 and 73% on Twitter16 within 24 hours, that is much faster than other models.", "Our analysis shows that rumors typically demonstrate more complex propagation substructures especially at early stage.", "Figure 5 shows a detected subtree of a false rumor spread in its first few hours, where influential users are somehow captured to boost its propagation and the information flows among the users with an obvious unpopular-to-popular-to-unpopular trend in terms of user popularity, but such pattern was not witnessed in non-rumors in early stage.", "Many textual signals (underlined) can also be observed in that early period.", "Our method can learn such structures and patterns naturally, but it is difficult to know and hand-craft them in feature engineering.", "Conclusion and Future Work We propose a novel approach for detecting rumors in microblog posts based on kernel learning method using propagation trees.", "A propagation tree encodes the spread of a hypothesis (i.e., a source tweet) with complex structured patterns and flat information regarding content, user and time associated with the tree nodes.", "Enlightened by tree kernel techniques, our kernel method learns discriminant clues for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees via kernel functions.", "Experiments on two Twitter datasets show that our approach outperforms stateof-the-art baselines with large margin for both general and early rumor detection tasks.", "Since kernel-based approach covers more structural information than feature-based methods, it allows kernel to further incorporate information from a high dimensional space for possibly better discrimination.", "In the future, we will focus on improving the rumor detection task by exploring network representation learning framework.", "Moreover, we plan to investigate unsupervised models considering massive unlabeled rumorous data from social media." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Representation of Tweets Propagation", "Propagation Tree Kernel Modeling", "Background of Tree Kernel", "Our PTK Model", "Context-Sensitive Extension of PTK", "Rumor Detection via Kernel Learning", "Data Sets", "Experimental Setup", "Experimental Results", "Early Detection Performance", "Conclusion and Future Work" ] }
GEM-SciDuet-train-101#paper-1265#slide-19
Conclusion and future work
Apply kernel learning method for rumor debunking by utilizing the propagation tree structures. Propagation tree encodes the spread of a source tweet with complex structured patterns and flat information regarding content, user and time associated with the tree nodes. Our kernels are combined under a supervised framework for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees. Explore network representation method to improve the rumor detection task. Develop unsupervised models due to massive unlabeled data from social media.
Apply kernel learning method for rumor debunking by utilizing the propagation tree structures. Propagation tree encodes the spread of a source tweet with complex structured patterns and flat information regarding content, user and time associated with the tree nodes. Our kernels are combined under a supervised framework for identifying rumors of finer-grained levels by directly measuring the similarity among propagation trees. Explore network representation method to improve the rumor detection task. Develop unsupervised models due to massive unlabeled data from social media.
[]
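The paper content quoted in the record above spells out the Propagation Tree Kernel's node similarity f (an exp(-|t_i - t_j|) time decay applied to a weighted combination of a user term E and a Jaccard content term J) and the recursive subtree similarity Lambda. The sketch below transcribes those two formulas as written in the quoted text; the Node class, the n-gram sets and the default alpha are illustrative assumptions, and E is kept as the Euclidean distance exactly as the quoted text defines it.

```python
# Sketch of f(v_i, v_j) and the recursive Lambda(v, v') from the quoted text.
# Data layout (Node) and alpha are assumptions; the formulas follow the text.
import math
import numpy as np

class Node:
    def __init__(self, user_vec, ngrams, time_lag, children=None):
        self.user = np.asarray(user_vec, dtype=float)  # user attribute vector u_v
        self.ngrams = set(ngrams)                      # uni-/bi-grams of the post c_v
        self.time = float(time_lag)                    # time lag t_v to the source tweet
        self.children = children or []

def node_similarity(vi, vj, alpha=0.5):
    """f(v_i, v_j) = exp(-|t_i - t_j|) * (alpha * E(u_i, u_j) + (1 - alpha) * J(c_i, c_j))."""
    e = np.linalg.norm(vi.user - vj.user)              # E: Euclidean distance, as defined in the text
    union = vi.ngrams | vj.ngrams
    j = len(vi.ngrams & vj.ngrams) / len(union) if union else 0.0  # J: Jaccard coefficient
    return math.exp(-abs(vi.time - vj.time)) * (alpha * e + (1 - alpha) * j)

def subtree_similarity(v, w, alpha=0.5):
    """Lambda(v, v'): f at the roots, times prod_k (1 + Lambda) over paired children."""
    f = node_similarity(v, w, alpha)
    if not v.children or not w.children:               # leaf case: Lambda = f
        return f
    prod = 1.0
    for cv, cw in zip(v.children, w.children):         # up to min(nc(v), nc(v')) child pairs
        prod *= 1.0 + subtree_similarity(cv, cw, alpha)
    return f * prod
```

Per the quoted text, the full kernel then sums these Lambda values over each node's most similar counterpart in the other tree, and the resulting pairwise kernel matrix is fed to an SVM with one-vs-all classification.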
GEM-SciDuet-train-102#paper-1266#slide-0
1266
Learning bilingual word embeddings with (almost) no bilingual data
Most methods to learn bilingual word embeddings rely on large parallel corpora, which is difficult to obtain for most language pairs. This has motivated an active research line to relax this requirement, with methods that use document-aligned corpora or bilingual dictionaries of a few thousand words instead. In this work, we further reduce the need of bilingual resources using a very simple self-learning approach that can be combined with any dictionary-based mapping technique. Our method exploits the structural similarity of embedding spaces, and works with as little bilingual evidence as a 25 word dictionary or even an automatically generated list of numerals, obtaining results comparable to those of systems that use richer resources.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197 ], "paper_content_text": [ "Introduction Multilingual word embeddings have attracted a lot of attention in recent times.", "In addition to having a direct application in inherently crosslingual tasks like machine translation (Zou et al., 2013) and crosslingual entity linking (Tsai and Roth, 2016) , they provide an excellent mechanism for transfer learning, where a model trained in a resource-rich language is transferred to a less-resourced one, as shown with part-of-speech tagging , parsing (Xiao and Guo, 2014) and document classification (Klementiev et al., 2012) .", "Most methods to learn these multilingual word embeddings make use of large parallel corpora (Gouws et al., 2015; Luong et al., 2015) , but there have been several proposals to relax this requirement, given its scarcity in most language pairs.", "A possible relaxation is to use document-aligned or label-aligned comparable corpora (SΓΈgaard et al., 2015; VuliΔ‡ and Moens, 2016; Mogadala and Rettinger, 2016) , but large amounts of such corpora are not always available for some language pairs.", "An alternative approach that we follow here is to independently train the embeddings for each language on monolingual corpora, and then learn a linear transformation to map the embeddings from one space into the other by minimizing the distances in a bilingual dictionary, usually in the range of a few thousand entries (Mikolov et al., 2013a; Artetxe et al., 2016) .", "However, dictionaries of that size are not readily available for many language pairs, specially those involving less-resourced languages.", "In this work, we reduce the need of large bilingual dictionaries to much smaller seed dictionaries.", "Our method can work with as little as 25 word pairs, which are straightforward to obtain assuming some basic knowledge of the languages involved.", "The method can also work with trivially generated seed dictionaries of numerals (i.e.", "1-1, 2-2, 3-3, 4-4...) 
making it possible to learn bilingual word embeddings without any real bilingual data.", "In either case, we obtain very competitive results, comparable to other state-of-the-art methods that make use of much richer bilingual resources.", "The proposed method is an extension of existing mapping techniques, where the dictionary is used to learn the embedding mapping and the embedding mapping is used to induce a new dictionary iteratively in a self-learning fashion (see Figure 1) .", "In spite of its simplicity, our analysis of the implicit optimization objective reveals that the method is exploiting the structural similarity of independently trained embeddings.", "We analyze previous work in Section 2.", "Section 3 describes the self-learning framework, while Section 4 presents the experiments.", "Section 5 analyzes the underlying optimization objective, and Section 6 presents an error analysis.", "Figure 1 : A general schema of the proposed self-learning framework.", "Previous works learn a mapping W based on the seed dictionary D, which is then used to learn the full dictionary.", "In our proposal we use the new dictionary to learn a new mapping, iterating until convergence.", "Related work We will first focus on bilingual embedding mappings, which are the basis of our proposals, and then on other unsupervised and weakly supervised methods to learn bilingual word embeddings.", "Bilingual embedding mappings Methods to induce bilingual mappings work by independently learning the embeddings in each language using monolingual corpora, and then learning a transformation from one embedding space into the other based on a bilingual dictionary.", "The first of such methods is due to Mikolov et al.", "(2013a) , who learn the linear transformation that minimizes the sum of squared Euclidean distances for the dictionary entries.", "The same optimization objective is used by , who constrain the transformation matrix to be orthogonal.", "Xing et al.", "(2015) incorporate length normalization in the training of word embeddings and maximize the cosine similarity instead, enforcing the orthogonality constraint to preserve the length normalization after the mapping.", "Finally, use max-margin optimization with intruder negative sampling.", "Instead of learning a single linear transformation from the source language into the target language, Faruqui and Dyer (2014) use canonical correlation analysis to map both languages to a shared vector space.", "Lu et al.", "(2015) extend this work and apply deep canonical correlation analysis to learn non-linear transformations.", "Artetxe et al.", "(2016) propose a general framework that clarifies the relation between Mikolov et al.", "(2013a) , Xing et al.", "(2015) , Faruqui and Dyer (2014) and as variants of the same core optimization objective, and show that a new variant is able to surpass them all.", "While most of the previous methods use gradient descent, Artetxe et al.", "(2016) propose an efficient analytical implementation for those same methods, recently extended by Smith et al.", "(2017) to incorporate dimensionality reduction.", "A prominent application of bilingual embedding mappings, with a direct application in machine translation (Zhao et al., 2015) , is bilingual lexicon extraction, which is also the main evaluation method.", "More specifically, the learned mapping is used to induce the translation of source language words that were missing in the original dictionary, usually by taking their nearest neighbor word in the target language according to 
cosine similarity, although and Smith et al.", "(2017) propose alternative retrieval methods to address the hubness problem.", "Unsupervised and weakly supervised bilingual embeddings As mentioned before, our method works with as little as 25 word pairs, while the methods discussed previously use thousands of pairs.", "The only exception in this regard is the work by , who only use 10 word pairs with good results on transfer learning for part-of-speech tagging.", "Our experiments will show that, although their method captures coarse-grained relations, it fails on finer-grained tasks like bilingual lexicon induction.", "Bootstrapping methods similar to ours have been previously proposed for traditional countbased vector space models (Peirsman and PadΓ³, 2010; VuliΔ‡ and Moens, 2013) .", "However, while previous techniques incrementally build a high- dimensional model where each axis encodes the co-occurrences with a specific word and its equivalent in the other language, our method works with low-dimensional pre-trained word embeddings, which are more widely used nowadays.", "A practical aspect for reducing the need of bilingual supervision is on the design of the seed dictionary.", "This is analyzed in depth by VuliΔ‡ and Korhonen (2016) , who propose using documentaligned corpora to extract the training dictionary.", "A more common approach is to rely on shared words and cognates (Peirsman and PadΓ³, 2010; Smith et al., 2017) , eliminating the need of bilingual data in practice.", "Our use of shared numerals exploits the same underlying idea, but relies on even less bilingual evidence and should thus generalize better to distant language pairs.", "Miceli Barone (2016) and Cao et al.", "(2016) go one step further and attempt to learn bilingual embeddings without any bilingual evidence.", "The former uses adversarial autoencoders (Makhzani et al., 2016) , combining an encoder that maps the source language embeddings into the target language, a decoder that reconstructs the original embeddings, and a discriminator that distinguishes mapped embeddings from real target language embeddings, whereas the latter adds a regularization term to the training of word embeddings that pushes the mean and variance of each dimension in different languages close to each other.", "Although promising, the reported performance in both cases is poor in comparison to other methods.", "Finally, the induction of bilingual knowledge from monolingual corpora is closely related to the decipherment scenario, for which models that incorporate word embeddings have also been proposed (Dou et al., 2015) .", "However, decipherment is only concerned with translating text from one language to another and relies on complex statistical models that are designed specifically for that purpose, while our approach is more general and learns task-independent multilingual embeddings.", "Algorithm 2 Proposed self-learning framework Input: X (source embeddings) Input: Z (target embeddings) Input: D (seed dictionary) 1: repeat 2: W ← LEARN MAPPING(X, Z, D) 3: D ← LEARN DICTIONARY(X, Z, W ) 4: until convergence criterion 5: EVALUATE DICTIONARY(D) 3 Proposed self-learning framework As discussed in Section 2.1, a common evaluation task (and practical application) of bilingual embedding mappings is to induce bilingual lexicons, that is, to obtain the translation of source words that were missing in the training dictionary, which are then compared to a gold standard test dictionary for evaluation.", "This way, one can say that the seed 
(train) dictionary is used to learn a mapping, which is then used to induce a better dictionary (at least in the sense that it is larger).", "Algorithm 1 summarizes this framework.", "Following this observation, we propose to use the output dictionary in Algorithm 1 as the input of the same system in a self-learning fashion which, assuming that the output dictionary was indeed better than the original one, should serve to learn a better mapping and, consequently, an even better dictionary the second time.", "The process can then be repeated iteratively to obtain a hopefully better mapping and dictionary each time until some convergence criterion is met.", "Algorithm 2 summarizes this alternative framework that we propose.", "Our method can be combined with any embedding mapping and dictionary induction technique (see Section 2.1).", "However, efficiency turns out to be critical for a variety of reasons.", "First of all, by enclosing the learning logic in a loop, the total training time is increased by the number of iterations.", "Even more importantly, our framework requires to explicitly build the entire dictionary at each iteration, whereas previous work tends to induce the translation of individual words ondemand later at runtime.", "Moreover, from the second iteration onwards, it is this induced, full dictionary that has to be used to learn the embedding mapping, and not the considerably smaller seed dictionary as it is typically done.", "In the following two subsections, we respectively describe the embedding mapping method and the dictionary in-duction method that we adopt in our work with these efficiency requirements in mind.", "Embedding mapping As discussed in Section 2.1, most previous methods to learn embedding mappings use variants of gradient descent.", "Among the more efficient exact alternatives, we decide to adopt the one by Artetxe et al.", "(2016) for its simplicity and good results as reported in their paper.", "We next present their method, adapting the formalization to explicitly incorporate the dictionary as required by our self-learning algorithm.", "Let X and Z denote the word embedding matrices in two languages so that X i * corresponds to the ith source language word embedding and Z j * corresponds to the jth target language embedding.", "While Artetxe et al.", "(2016) assume these two matrices are aligned according to the dictionary, we drop this assumption and represent the dictionary explicitly as a binary matrix D, so that D ij = 1 if the ith source language word is aligned with the jth target language word.", "The goal is then to find the optimal mapping matrix W * so that the sum of squared Euclidean distances between the mapped source embeddings X i * W and target embeddings Z j * for the dictionary entries D ij is minimized: W * = arg min W i j D ij ||X i * W βˆ’ Z j * || 2 Following Artetxe et al.", "(2016) , we length normalize and mean center the embedding matrices X and Z in a preprocessing step, and constrain W to be an orthogonal matrix (i.e.", "W W T = W T W = I), which serves to enforce monolingual invariance, preventing a degradation in monolingual performance while yielding to better bilingual mappings.", "Under such orthogonality constraint, minimizing the squared Euclidean distance becomes equivalent to maximizing the dot product, so the above optimization objective can be reformulated as follows: W * = arg max W Tr XW Z T D T where Tr (Β·) denotes the trace operator (the sum of all the elements in the main diagonal).", "The optimal orthogonal 
solution for this problem is given by W * = U V T , where X T DZ = U Ξ£V T is the singular value decomposition of X T DZ.", "Since the dictionary matrix D is sparse, this can be efficiently computed in linear time with respect to the number of dictionary entries.", "Dictionary induction As discussed in Section 2.1, practically all previous work uses nearest neighbor retrieval for word translation induction based on embedding mappings.", "In nearest neighbor retrieval, each source language word is assigned the closest word in the target language.", "In our work, we use the dot product between the mapped source language embeddings and the target language embeddings as the similarity measure, which is roughly equivalent to cosine similarity given that we apply length normalization followed by mean centering as a preprocessing step (see Section 3.1).", "This way, following the notation in Section 3.1, we set D ij = 1 if j = argmax k (X i * W ) Β· Z k * and D ij = 0 other- wise 1 .", "While we find that independently computing the similarity measure between all word pairs is prohibitively slow, the computation of the entire similarity matrix XW Z T can be easily vectorized using popular linear algebra libraries, obtaining big performance gains.", "However, the resulting similarity matrix is often too large to fit in memory when using large vocabularies.", "For that reason, instead of computing the entire similarity matrix XW Z T in a single step, we iteratively compute submatrices of it using vectorized matrix multiplication, find their corresponding maxima each time, and then combine the results.", "Experiments and results In this section, we experimentally test the proposed method in bilingual lexicon induction and crosslingual word similarity.", "Subsection 4.1 describes the experimental settings, while Subsections 4.2 and 4.3 present the results obtained in each of the tasks.", "The code and resources necessary to reproduce our experiments are available at https://github.com/artetxem/ vecmap.", "Experimental settings For easier comparison with related work, we evaluated our mappings on bilingual lexicon induction using the public English-Italian dataset by , which includes monolingual word embeddings in both languages together with a bilingual dictionary split in a training set and a test set 2 .", "The embeddings were trained with the word2vec toolkit with CBOW and negative sampling (Mikolov et al., 2013b ) 3 , using a 2.8 billion word corpus for English (ukWaC + Wikipedia + BNC) and a 1.6 billion word corpus for Italian (itWaC) .", "The training and test sets were derived from a dictionary built form Europarl word alignments and available at OPUS (Tiedemann, 2012) , taking 1,500 random entries uniformly distributed in 5 frequency bins as the test set and the 5,000 most frequent of the remaining word pairs as the training set.", "In addition to English-Italian, we selected two other languages from different language families with publicly available resources.", "We thus created analogous datasets for English-German and English-Finnish.", "In the case of German, the embeddings were trained on the 0.9 billion word corpus SdeWaC, which is part of the WaCky collection (Baroni et al., 2009 ) that was also used for English and Italian.", "Given that Finnish is not included in this collection, we used the 2.8 billion word Common Crawl corpus provided at WMT 2016 4 instead, which we tokenized using the Stanford Tokenizer (Manning et al., 2014) .", "In addition to that, we created training and test 
sets for both pairs from their respective Europarl dictionaries from OPUS following the exact same procedure used for English-Italian, and the word embeddings were also trained using the same configuration as .", "Given that the main focus of our work is on small seed dictionaries, we created random subsets of 2,500, 1,000, 500, 250, 100, 75, 50 and 25 entries from the original training dictionaries of 5,000 entries.", "This was done by shuffling once the training dictionaries and taking their first k entries, so it is guaranteed that each dictionary is a strict subset of the bigger dictionaries.", "In addition to that, we explored using automatically generated dictionaries as a shortcut to practical unsupervised learning.", "For that purpose, we created numeral dictionaries, consisting of words matching the [0-9]+ regular expression in both vocabularies (e.g.", "1-1, 2-2, 3-3, 1992-1992 etc.).", "The resulting dictionary had 2772 entries for English-Italian, 2148 for English-German, and 2345 for English-Finnish.", "While more sophisticated approaches are possible (e.g.", "involving the edit distance of all words), we believe that this method is general enough that should work with practically any language pair, as Arabic numerals are often used even in languages with a different writing system (e.g.", "Chinese and Russian).", "While bilingual lexicon induction is a standard evaluation task for seed dictionary based methods like ours, it is unsuitable for bilingual corpus based methods, as statistical word alignment already provides a reliable way to derive dictionaries from bilingual corpora and, in fact, this is how the test dictionary itself is built in our case.", "For that reason, we carried out some experiments in crosslingual word similarity as a way to test our method in a different task and allowing to compare it to systems that use richer bilingual data.", "There are no many crosslingual word similarity datasets, and we used the RG-65 and WordSim-353 crosslingual datasets for English-German and the WordSim-353 crosslingual dataset for English-Italian as published by Camacho-Collados et al.", "(2015) 5 .", "As for the convergence criterion, we decide to stop training when the improvement on the average dot product for the induced dictionary falls below a given threshold from one iteration to the next.", "After length normalization, the dot product ranges from -1 to 1, so we decide to set this threshold at 1e-6, which we find to be a very conservative value yet enough that training takes a reasonable amount of time.", "The curves in the next section confirm that this was a reasonable choice.", "This convergence criterion is usually met in less than 100 iterations, each of them taking 5 minutes on a modest desktop computer (Intel Core i5-4670 CPU with 8GiB of RAM), including the induction of a dictionary of 200,000 words at each iteration.", "Bilingual lexicon induction For the experiments on bilingual lexicon induction, we compared our method with those proposed by Mikolov et al.", "(2013a) , Xing et al.", "(2015) , and Artetxe et al.", "(2016) , all of them implemented as part of the framework proposed by the latter.", "The results ob- Table 1 : Accuracy (%) on bilingual lexicon induction for different seed dictionaries tained with the 5,000 entry, 25 entry and the numerals dictionaries for all the 3 language pairs are given in Table 1 .", "The results for the 5,000 entry dictionaries show that our method is comparable or even better than the other systems.", "As another 
reference, the best published results using nearest-neighbor retrieval are due to , who report an accuracy of 40.20% for the full English-Italian dictionary, almost at pair with our system (39.67%).", "In any case, the main focus of our work is on smaller dictionaries, and it is under this setting that our method really stands out.", "The 25 entry and numerals columns in Table 1 show the results for this setting, where all previous methods drop dramatically, falling below 1% accuracy in all cases.", "The method by also obtains poor results with small dictionaries, which reinforces our hypothesis in Section 2.2 that their method can only capture coarse-grain bilingual relations for small dictionaries.", "In contrast, our proposed method obtains very competitive results for all dictionaries, with a difference of only 1-2 points between the full dictionary and both the 25 entry dictionary and the numerals dictionary in all three languages.", "Figure 2 shows the curve of the English-Italian accuracy for different seed dictionary sizes, confirming this trend.", "Finally, it is worth mentioning that, even if all the three language pairs show the same general behavior, there are clear differences in their absolute accuracy numbers, which can be attributed to the linguistic proximity of the languages involved.", "In particular, the results for English-Finnish are about 10 points below the rest, which is explained by the fact that Finnish is a non-indoeuropean agglutinative language, making the task considerably more difficult for this language pair.", "In this regard, we believe that the good results with small dictionaries are a strong indication of the robustness of our method, showing that it is able to learn good bilingual mappings from very little bilingual ev-idence even for distant language pairs where the structural similarity of the embedding spaces is presumably weaker.", "Crosslingual word similarity In addition to the baseline systems in Section 4.2, in the crosslingual similarity experiments we also tested the method by Luong et al.", "(2015) , which is the state-of-the-art for bilingual word embeddings based on parallel corpora (Upadhyay et al., 2016) 6 .", "As this method is an extension of word2vec, we used the same hyperparameters as for the monolingual embeddings when possible (see Section 4.1), and leave the default ones otherwise.", "We used Europarl as our parallel corpus to train this method as done by the authors, which consists of nearly 2 million parallel sentences.", "As shown in the results in Table 2 , our method obtains the best results in all cases, surpassing the rest of the dictionary-based methods by 1-3 points depending on the dataset.", "But, most importantly, it does not suffer from any significant degradation for using smaller dictionaries and, in fact, our method gets better results using the 25 entry dictionary or the numeral list as the only bilingual evidence than any of the baseline systems using much richer resources.", "The relatively poor results of Luong et al.", "(2015) can be attributed to the fact that the dictionary based methods make use of much bigger monolingual corpora, while methods based on parallel corpora are restricted to smaller corpora.", "However, it is not clear how to introduce monolingual corpora on those methods.", "We did run some experiments with BilBOWA (Gouws et al., 2015) , which supports training in monolingual corpora in addition to bilingual corpora, but obtained very poor results 7 .", "All in all, our experiments show 
Figure 2 : Accuracy on English-Italian bilingual lexicon induction for different seed dictionaries that it is better to use large monolingual corpora in combination with very little bilingual data rather than a bilingual corpus of a standard size alone.", "Global optimization objective It might seem somehow surprising at first that, as seen in the previous section, our simple selflearning approach is able to learn high quality bilingual embeddings from small seed dictionaries instead of falling in degenerated solutions.", "In this section, we try to shed light on our approach, and give empirical evidence supporting our claim.", "More concretely, we argue that, for the embedding mapping and dictionary induction methods described in Section 3, the proposed selflearning framework is implicitly solving the following global optimization problem 8 : W * = arg max W i max j (X i * W ) Β· Z j * s.t.", "W W T = W T W = I Contrary to the optimization objective for W in Section 3.1, the global optimization objective does not refer to any dictionary, and maximizes the similarity between each source language word and its closest target language word.", "Intuitively, a random solution would map source language embeddings to seemingly random locations in the target language space, and it would thus be unlikely that BilBOWA.", "8 While we restrict our formal analysis to the embedding mapping and dictionary induction method that we use, the general reasoning should be valid for other choices as well.", ".628 .739 .604 Table 2 : Spearman correlations on English-Italian and English-German crosslingual word similarity they have any target language word nearby, making the optimization value small.", "In contrast, a good solution would map source language words close to their translation equivalents in the target language space, and they would thus have their corresponding embeddings nearby, making the optimization value large.", "While it is certainly possible to build degenerated solutions that take high optimization values for small subsets of the vocabulary, we think that the structural similarity between independently trained embedding spaces in different languages is strong enough that optimizing this function yields to meaningful bilingual mappings when the size of the vocabulary is much larger than the dimensionality of the embeddings.", "The reasoning for how the self-learning framework is optimizing this objective is as follows.", "At the end of each iteration, the dictionary D is updated to assign, for the current mapping W , each source language word to its closest target language word.", "This way, when we update W to maximize the average similarity of these dictionary entries at the beginning of the next iteration, it is guaranteed that the value of the optimization objective will improve (or at least remain the same).", "The reason is that the average similarity between each word and what were previously the closest words will be improved if possible, as this is what the updated W directly optimizes (see Section 3.1).", "In addition to that, it is also possible that, for some source words, some other target words get closer after the update.", "Thanks to this, our self-learning algorithm is guaranteed to converge to a local optimum of the above global objective, behaving like an alternating optimization algorithm for it.", "It is interesting to note that the above reasoning is valid no matter what the the initial solution is, and, in fact, the global optimization objective does not depend on the 
seed dictionary nor any other bilingual resource.", "For that reason, it should be possible to use a random initialization instead of a small seed dictionary.", "However, we empirically observe that this works poorly in practice, as our algorithm tends to get stuck in poor local optima when the initial solution is not good enough.", "The general behavior of our method is reflected in Figure 3, which shows the learning curve for different seed dictionaries according to both the objective function and the accuracy on bilingual lexicon induction.", "As can be seen, the objective function is improved from iteration to iteration and converges to a local optimum just as expected.", "At the same time, the learning curves show a strong correlation between the optimization objective and the accuracy, as can be clearly observed that improving the former leads to an improvement of the latter, confirming our explanations.", "Regarding random initialization, the figure shows that the algorithm gets stuck in a poor local optimum of the objective function, which is the reason for the bad performance (0% accuracy) on bilingual lexicon induction, but the proposed optimization objective itself seems to be adequate.", "Finally, we empirically observe that our algorithm learns similar mappings no matter what the seed dictionary was.", "We first repeated our experiments on English-Italian bilingual lexicon induction for 5 different dictionaries of 25 entries, obtaining an average accuracy of 38.15% and a standard deviation of only 0.75%.", "In addition to that, we observe that the overlap between the predictions made when starting with the full dictionary and the numerals dictionary is 76.00% (60.00% for the 25 entry dictionary).", "At the same time, 37.00% of the test cases are correctly solved by both instances, and it is only 5.07% of the test cases that one of them gets right and the other wrong (34.00% and 8.94% for the 25 entry dictionary).", "This suggests that our algorithm tends to converge to similar solutions even for disjoint seed dictionaries, which is in line with our view that we are implicitly optimizing an objective that is independent from the seed dictionary, yet a seed dictionary is necessary to build a good enough initial solution to avoid getting stuck in poor local optima.", "For that reason, it is likely that better methods to tackle this optimization problem would allow learning bilingual word embeddings without any bilingual evidence at all and, in this regard, we believe that our work opens exciting opportunities for future research.", "Error analysis. So as to better understand the behavior of our system, we performed an error analysis of its output in English-Italian bilingual lexicon induction when starting with the 5,000 entry, the 25 entry and the numeral dictionaries in comparison with the baseline method of Artetxe et al. (2016).", "Our analysis first reveals that, in all the cases, about a third of the translations taken as erroneous according to the gold standard are not so in reality.", "This corresponds to both different morphological variants of the gold standard translations (e.g.", "dichiarato/dichiarò) and other valid translations that were missing in the gold standard (e.g.", "climb → salita instead of the gold standard scalato).", "This phenomenon is considerably more pronounced in the first frequency bins, which already have a much higher accuracy according to the gold standard.", "As for the actual errors, we observe that nearly a third of them correspond
to named entities for all the different variants.", "Interestingly, the vast majority of the proposed translations in these cases are also named entities (e.g.", "Ryan → Jason, John → Paolo), which are often highly related to the original ones (e.g.", "Volvo → BMW, Olympus → Nikon).", "While these are clear errors, it is understandable that these methods are unable to discriminate between named entities to this degree based solely on the distributional hypothesis, in particular when it comes to common proper names (e.g.", "John, Andy), and one could design alternative strategies to address this issue like taking the edit distance as an additional signal.", "For the remaining errors, all systems tend to propose translations that have some degree of relationship with the correct ones, including near-synonyms (e.g.", "guidelines → raccomandazioni), antonyms (e.g.", "sender → destinatario) and words in the same semantic field (e.g.", "nominalism → intuizionismo / innatismo, which are all philosophical doctrines).", "However, there are also a few instances where the relationship is weak or unclear (e.g.", "loch → giardini, sweep → serrare).", "We also observe a few errors that are related to multiwords or collocations (e.g.", "carrier → aereo, presumably related to the multiword air carrier / linea aerea), as well as a rare word that is repeated across many translations (Ferruzzi), which could be attributed to the hubness problem.", "All in all, our error analysis reveals that the baseline method of Artetxe et al. (2016)", "and the proposed algorithm tend to make the same kind of errors regardless of the seed dictionary used by the latter, which reinforces our interpretation in the previous section regarding an underlying optimization objective that is independent from any training dictionary.", "Moreover, it shows that the quality of the learned mappings is much better than what the raw accuracy numbers might suggest, encouraging the incorporation of these techniques in other applications.", "Conclusions and future work. In this work, we propose a simple self-learning framework to learn bilingual word embedding mappings in combination with any embedding mapping and dictionary induction technique.", "Our experiments on bilingual lexicon induction and crosslingual word similarity show that our method is able to learn high quality bilingual embeddings from as little bilingual evidence as a 25 word dictionary or an automatically generated list of numerals, obtaining results that are competitive with state-of-the-art systems using much richer bilingual resources like larger dictionaries or parallel corpora.", "In spite of its simplicity, a more detailed analysis shows that our method is implicitly optimizing a meaningful objective function that is independent from any bilingual data which, with a better optimization method, might allow learning bilingual word embeddings in a completely unsupervised manner.", "In the future, we would like to delve deeper into this direction and fine-tune our method so it can reliably learn high quality bilingual word embeddings without any bilingual evidence at all.", "In addition to that, we would like to explore non-linear transformations (Lu et al., 2015) and alternative dictionary induction methods (Smith et al., 2017).", "Finally, we would like to apply our model in the decipherment scenario (Dou et al., 2015)." ] }
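The global objective described in the paper content above can be checked directly. Below is a minimal numpy sketch, written for this document rather than taken from the paper's released code; the array names X, Z, W and the block size are assumptions. It computes the average, over source words, of the similarity to the closest target word, which is the quantity the self-learning loop is argued to improve monotonically.

```python
import numpy as np

def global_objective(X, Z, W, block_size=1000):
    """Average of max_j (X[i] @ W) . Z[j] over all source words i.

    X and Z are assumed to be length-normalized, mean-centered embedding
    matrices (one row per word) and W an orthogonal mapping from the source
    space into the target space.  The |X| x |Z| similarity matrix is never
    materialized; it is processed in row blocks to keep memory bounded.
    """
    XW = X @ W
    best = np.empty(XW.shape[0])
    for start in range(0, XW.shape[0], block_size):
        sims = XW[start:start + block_size] @ Z.T          # one block of similarities
        best[start:start + block_size] = sims.max(axis=1)  # closest target word per source word
    return best.mean()
```

Tracking this value after each iteration reproduces, up to a constant factor, the convergence criterion mentioned above (the average dot product of the induced dictionary).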
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related work", "Bilingual embedding mappings", "Unsupervised and weakly supervised bilingual embeddings", "Embedding mapping", "Dictionary induction", "Experiments and results", "Experimental settings", "Bilingual lexicon induction", "Crosslingual word similarity", "Global optimization objective", "Error analysis", "Conclusions and future work" ] }
GEM-SciDuet-train-102#paper-1266#slide-0
Who cares
word embeddings are useful! - inherently crosslingual tasks - crosslingual transfer learning bilingual signal for training - parallel corpora bilingual signal for training - comparable corpora - numerals (1, 2, 3) Previous work This talk - parallel corpora - 25 word dictionary bilingual signal
word embeddings are useful! - inherently crosslingual tasks - crosslingual transfer learning bilingual signal for training - parallel corpora bilingual signal for training - comparable corpora - numerals (1, 2, 3) Previous work This talk - parallel corpora - 25 word dictionary bilingual signal
[]
GEM-SciDuet-train-102#paper-1266#slide-1
1266
Learning bilingual word embeddings with (almost) no bilingual data
Most methods to learn bilingual word embeddings rely on large parallel corpora, which is difficult to obtain for most language pairs. This has motivated an active research line to relax this requirement, with methods that use document-aligned corpora or bilingual dictionaries of a few thousand words instead. In this work, we further reduce the need of bilingual resources using a very simple self-learning approach that can be combined with any dictionary-based mapping technique. Our method exploits the structural similarity of embedding spaces, and works with as little bilingual evidence as a 25 word dictionary or even an automatically generated list of numerals, obtaining results comparable to those of systems that use richer resources.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197 ], "paper_content_text": [ "Introduction Multilingual word embeddings have attracted a lot of attention in recent times.", "In addition to having a direct application in inherently crosslingual tasks like machine translation (Zou et al., 2013) and crosslingual entity linking (Tsai and Roth, 2016) , they provide an excellent mechanism for transfer learning, where a model trained in a resource-rich language is transferred to a less-resourced one, as shown with part-of-speech tagging , parsing (Xiao and Guo, 2014) and document classification (Klementiev et al., 2012) .", "Most methods to learn these multilingual word embeddings make use of large parallel corpora (Gouws et al., 2015; Luong et al., 2015) , but there have been several proposals to relax this requirement, given its scarcity in most language pairs.", "A possible relaxation is to use document-aligned or label-aligned comparable corpora (SΓΈgaard et al., 2015; VuliΔ‡ and Moens, 2016; Mogadala and Rettinger, 2016) , but large amounts of such corpora are not always available for some language pairs.", "An alternative approach that we follow here is to independently train the embeddings for each language on monolingual corpora, and then learn a linear transformation to map the embeddings from one space into the other by minimizing the distances in a bilingual dictionary, usually in the range of a few thousand entries (Mikolov et al., 2013a; Artetxe et al., 2016) .", "However, dictionaries of that size are not readily available for many language pairs, specially those involving less-resourced languages.", "In this work, we reduce the need of large bilingual dictionaries to much smaller seed dictionaries.", "Our method can work with as little as 25 word pairs, which are straightforward to obtain assuming some basic knowledge of the languages involved.", "The method can also work with trivially generated seed dictionaries of numerals (i.e.", "1-1, 2-2, 3-3, 4-4...) 
making it possible to learn bilingual word embeddings without any real bilingual data.", "In either case, we obtain very competitive results, comparable to other state-of-the-art methods that make use of much richer bilingual resources.", "The proposed method is an extension of existing mapping techniques, where the dictionary is used to learn the embedding mapping and the embedding mapping is used to induce a new dictionary iteratively in a self-learning fashion (see Figure 1) .", "In spite of its simplicity, our analysis of the implicit optimization objective reveals that the method is exploiting the structural similarity of independently trained embeddings.", "We analyze previous work in Section 2.", "Section 3 describes the self-learning framework, while Section 4 presents the experiments.", "Section 5 analyzes the underlying optimization objective, and Section 6 presents an error analysis.", "Figure 1 : A general schema of the proposed self-learning framework.", "Previous works learn a mapping W based on the seed dictionary D, which is then used to learn the full dictionary.", "In our proposal we use the new dictionary to learn a new mapping, iterating until convergence.", "Related work We will first focus on bilingual embedding mappings, which are the basis of our proposals, and then on other unsupervised and weakly supervised methods to learn bilingual word embeddings.", "Bilingual embedding mappings Methods to induce bilingual mappings work by independently learning the embeddings in each language using monolingual corpora, and then learning a transformation from one embedding space into the other based on a bilingual dictionary.", "The first of such methods is due to Mikolov et al.", "(2013a) , who learn the linear transformation that minimizes the sum of squared Euclidean distances for the dictionary entries.", "The same optimization objective is used by , who constrain the transformation matrix to be orthogonal.", "Xing et al.", "(2015) incorporate length normalization in the training of word embeddings and maximize the cosine similarity instead, enforcing the orthogonality constraint to preserve the length normalization after the mapping.", "Finally, use max-margin optimization with intruder negative sampling.", "Instead of learning a single linear transformation from the source language into the target language, Faruqui and Dyer (2014) use canonical correlation analysis to map both languages to a shared vector space.", "Lu et al.", "(2015) extend this work and apply deep canonical correlation analysis to learn non-linear transformations.", "Artetxe et al.", "(2016) propose a general framework that clarifies the relation between Mikolov et al.", "(2013a) , Xing et al.", "(2015) , Faruqui and Dyer (2014) and as variants of the same core optimization objective, and show that a new variant is able to surpass them all.", "While most of the previous methods use gradient descent, Artetxe et al.", "(2016) propose an efficient analytical implementation for those same methods, recently extended by Smith et al.", "(2017) to incorporate dimensionality reduction.", "A prominent application of bilingual embedding mappings, with a direct application in machine translation (Zhao et al., 2015) , is bilingual lexicon extraction, which is also the main evaluation method.", "More specifically, the learned mapping is used to induce the translation of source language words that were missing in the original dictionary, usually by taking their nearest neighbor word in the target language according to 
cosine similarity, although and Smith et al.", "(2017) propose alternative retrieval methods to address the hubness problem.", "Unsupervised and weakly supervised bilingual embeddings As mentioned before, our method works with as little as 25 word pairs, while the methods discussed previously use thousands of pairs.", "The only exception in this regard is the work by , who only use 10 word pairs with good results on transfer learning for part-of-speech tagging.", "Our experiments will show that, although their method captures coarse-grained relations, it fails on finer-grained tasks like bilingual lexicon induction.", "Bootstrapping methods similar to ours have been previously proposed for traditional countbased vector space models (Peirsman and PadΓ³, 2010; VuliΔ‡ and Moens, 2013) .", "However, while previous techniques incrementally build a high- dimensional model where each axis encodes the co-occurrences with a specific word and its equivalent in the other language, our method works with low-dimensional pre-trained word embeddings, which are more widely used nowadays.", "A practical aspect for reducing the need of bilingual supervision is on the design of the seed dictionary.", "This is analyzed in depth by VuliΔ‡ and Korhonen (2016) , who propose using documentaligned corpora to extract the training dictionary.", "A more common approach is to rely on shared words and cognates (Peirsman and PadΓ³, 2010; Smith et al., 2017) , eliminating the need of bilingual data in practice.", "Our use of shared numerals exploits the same underlying idea, but relies on even less bilingual evidence and should thus generalize better to distant language pairs.", "Miceli Barone (2016) and Cao et al.", "(2016) go one step further and attempt to learn bilingual embeddings without any bilingual evidence.", "The former uses adversarial autoencoders (Makhzani et al., 2016) , combining an encoder that maps the source language embeddings into the target language, a decoder that reconstructs the original embeddings, and a discriminator that distinguishes mapped embeddings from real target language embeddings, whereas the latter adds a regularization term to the training of word embeddings that pushes the mean and variance of each dimension in different languages close to each other.", "Although promising, the reported performance in both cases is poor in comparison to other methods.", "Finally, the induction of bilingual knowledge from monolingual corpora is closely related to the decipherment scenario, for which models that incorporate word embeddings have also been proposed (Dou et al., 2015) .", "However, decipherment is only concerned with translating text from one language to another and relies on complex statistical models that are designed specifically for that purpose, while our approach is more general and learns task-independent multilingual embeddings.", "Algorithm 2 Proposed self-learning framework Input: X (source embeddings) Input: Z (target embeddings) Input: D (seed dictionary) 1: repeat 2: W ← LEARN MAPPING(X, Z, D) 3: D ← LEARN DICTIONARY(X, Z, W ) 4: until convergence criterion 5: EVALUATE DICTIONARY(D) 3 Proposed self-learning framework As discussed in Section 2.1, a common evaluation task (and practical application) of bilingual embedding mappings is to induce bilingual lexicons, that is, to obtain the translation of source words that were missing in the training dictionary, which are then compared to a gold standard test dictionary for evaluation.", "This way, one can say that the seed 
(train) dictionary is used to learn a mapping, which is then used to induce a better dictionary (at least in the sense that it is larger).", "Algorithm 1 summarizes this framework.", "Following this observation, we propose to use the output dictionary in Algorithm 1 as the input of the same system in a self-learning fashion which, assuming that the output dictionary was indeed better than the original one, should serve to learn a better mapping and, consequently, an even better dictionary the second time.", "The process can then be repeated iteratively to obtain a hopefully better mapping and dictionary each time until some convergence criterion is met.", "Algorithm 2 summarizes this alternative framework that we propose.", "Our method can be combined with any embedding mapping and dictionary induction technique (see Section 2.1).", "However, efficiency turns out to be critical for a variety of reasons.", "First of all, by enclosing the learning logic in a loop, the total training time is increased by the number of iterations.", "Even more importantly, our framework requires explicitly building the entire dictionary at each iteration, whereas previous work tends to induce the translation of individual words on-demand later at runtime.", "Moreover, from the second iteration onwards, it is this induced, full dictionary that has to be used to learn the embedding mapping, and not the considerably smaller seed dictionary as it is typically done.", "In the following two subsections, we respectively describe the embedding mapping method and the dictionary induction method that we adopt in our work with these efficiency requirements in mind.", "Embedding mapping. As discussed in Section 2.1, most previous methods to learn embedding mappings use variants of gradient descent.", "Among the more efficient exact alternatives, we decide to adopt the one by Artetxe et al. (2016)", "for its simplicity and good results as reported in their paper.", "We next present their method, adapting the formalization to explicitly incorporate the dictionary as required by our self-learning algorithm.", "Let X and Z denote the word embedding matrices in two languages so that $X_{i*}$ corresponds to the ith source language word embedding and $Z_{j*}$ corresponds to the jth target language embedding.", "While Artetxe et al. (2016)", "assume these two matrices are aligned according to the dictionary, we drop this assumption and represent the dictionary explicitly as a binary matrix D, so that $D_{ij} = 1$ if the ith source language word is aligned with the jth target language word.", "The goal is then to find the optimal mapping matrix $W^*$ so that the sum of squared Euclidean distances between the mapped source embeddings $X_{i*} W$ and target embeddings $Z_{j*}$ for the dictionary entries $D_{ij}$ is minimized: $W^* = \arg\min_W \sum_i \sum_j D_{ij} \, \|X_{i*} W - Z_{j*}\|^2$. Following Artetxe et al. (2016),", "we length normalize and mean center the embedding matrices X and Z in a preprocessing step, and constrain W to be an orthogonal matrix (i.e.", "$W W^T = W^T W = I$), which serves to enforce monolingual invariance, preventing a degradation in monolingual performance while yielding better bilingual mappings.", "Under such an orthogonality constraint, minimizing the squared Euclidean distance becomes equivalent to maximizing the dot product, so the above optimization objective can be reformulated as follows: $W^* = \arg\max_W \operatorname{Tr}(X W Z^T D^T)$, where $\operatorname{Tr}(\cdot)$ denotes the trace operator (the sum of all the elements in the main diagonal).", "The optimal orthogonal
solution for this problem is given by $W^* = U V^T$, where $X^T D Z = U \Sigma V^T$ is the singular value decomposition of $X^T D Z$.", "Since the dictionary matrix D is sparse, this can be efficiently computed in linear time with respect to the number of dictionary entries.", "Dictionary induction. As discussed in Section 2.1, practically all previous work uses nearest neighbor retrieval for word translation induction based on embedding mappings.", "In nearest neighbor retrieval, each source language word is assigned the closest word in the target language.", "In our work, we use the dot product between the mapped source language embeddings and the target language embeddings as the similarity measure, which is roughly equivalent to cosine similarity given that we apply length normalization followed by mean centering as a preprocessing step (see Section 3.1).", "This way, following the notation in Section 3.1, we set $D_{ij} = 1$ if $j = \arg\max_k \, (X_{i*} W) \cdot Z_{k*}$ and $D_{ij} = 0$ otherwise.", "While we find that independently computing the similarity measure between all word pairs is prohibitively slow, the computation of the entire similarity matrix $X W Z^T$ can be easily vectorized using popular linear algebra libraries, obtaining large performance gains.", "However, the resulting similarity matrix is often too large to fit in memory when using large vocabularies.", "For that reason, instead of computing the entire similarity matrix $X W Z^T$ in a single step, we iteratively compute submatrices of it using vectorized matrix multiplication, find their corresponding maxima each time, and then combine the results.", "Experiments and results. In this section, we experimentally test the proposed method in bilingual lexicon induction and crosslingual word similarity.", "Subsection 4.1 describes the experimental settings, while Subsections 4.2 and 4.3 present the results obtained in each of the tasks.", "The code and resources necessary to reproduce our experiments are available at https://github.com/artetxem/vecmap.", "Experimental settings. For easier comparison with related work, we evaluated our mappings on bilingual lexicon induction using the public English-Italian dataset by , which includes monolingual word embeddings in both languages together with a bilingual dictionary split into a training set and a test set.", "The embeddings were trained with the word2vec toolkit with CBOW and negative sampling (Mikolov et al., 2013b), using a 2.8 billion word corpus for English (ukWaC + Wikipedia + BNC) and a 1.6 billion word corpus for Italian (itWaC).", "The training and test sets were derived from a dictionary built from Europarl word alignments and available at OPUS (Tiedemann, 2012), taking 1,500 random entries uniformly distributed in 5 frequency bins as the test set and the 5,000 most frequent of the remaining word pairs as the training set.", "In addition to English-Italian, we selected two other languages from different language families with publicly available resources.", "We thus created analogous datasets for English-German and English-Finnish.", "In the case of German, the embeddings were trained on the 0.9 billion word corpus SdeWaC, which is part of the WaCky collection (Baroni et al., 2009) that was also used for English and Italian.", "Given that Finnish is not included in this collection, we used the 2.8 billion word Common Crawl corpus provided at WMT 2016 instead, which we tokenized using the Stanford Tokenizer (Manning et al., 2014).", "In addition to that, we created training and test
sets for both pairs from their respective Europarl dictionaries from OPUS following the exact same procedure used for English-Italian, and the word embeddings were also trained using the same configuration as .", "Given that the main focus of our work is on small seed dictionaries, we created random subsets of 2,500, 1,000, 500, 250, 100, 75, 50 and 25 entries from the original training dictionaries of 5,000 entries.", "This was done by shuffling once the training dictionaries and taking their first k entries, so it is guaranteed that each dictionary is a strict subset of the bigger dictionaries.", "In addition to that, we explored using automatically generated dictionaries as a shortcut to practical unsupervised learning.", "For that purpose, we created numeral dictionaries, consisting of words matching the [0-9]+ regular expression in both vocabularies (e.g.", "1-1, 2-2, 3-3, 1992-1992 etc.).", "The resulting dictionary had 2772 entries for English-Italian, 2148 for English-German, and 2345 for English-Finnish.", "While more sophisticated approaches are possible (e.g.", "involving the edit distance of all words), we believe that this method is general enough that it should work with practically any language pair, as Arabic numerals are often used even in languages with a different writing system (e.g.", "Chinese and Russian).", "While bilingual lexicon induction is a standard evaluation task for seed dictionary based methods like ours, it is unsuitable for bilingual corpus based methods, as statistical word alignment already provides a reliable way to derive dictionaries from bilingual corpora and, in fact, this is how the test dictionary itself is built in our case.", "For that reason, we carried out some experiments in crosslingual word similarity as a way to test our method in a different task and to compare it to systems that use richer bilingual data.", "There are not many crosslingual word similarity datasets, and we used the RG-65 and WordSim-353 crosslingual datasets for English-German and the WordSim-353 crosslingual dataset for English-Italian as published by Camacho-Collados et al.", "(2015).", "As for the convergence criterion, we decide to stop training when the improvement on the average dot product for the induced dictionary falls below a given threshold from one iteration to the next.", "After length normalization, the dot product ranges from -1 to 1, so we decide to set this threshold at 1e-6, which we find to be a very conservative value yet enough that training takes a reasonable amount of time.", "The curves in the next section confirm that this was a reasonable choice.", "This convergence criterion is usually met in less than 100 iterations, each of them taking 5 minutes on a modest desktop computer (Intel Core i5-4670 CPU with 8GiB of RAM), including the induction of a dictionary of 200,000 words at each iteration.", "Bilingual lexicon induction. For the experiments on bilingual lexicon induction, we compared our method with those proposed by Mikolov et al.", "(2013a), Xing et al.", "(2015), and Artetxe et al.", "(2016), all of them implemented as part of the framework proposed by the latter.", "The results obtained with the 5,000 entry, 25 entry and the numerals dictionaries for all the 3 language pairs are given in Table 1. [Table 1: Accuracy (%) on bilingual lexicon induction for different seed dictionaries.]", "The results for the 5,000 entry dictionaries show that our method is comparable or even better than the other systems.", "As another
reference, the best published results using nearest-neighbor retrieval are due to , who report an accuracy of 40.20% for the full English-Italian dictionary, almost at pair with our system (39.67%).", "In any case, the main focus of our work is on smaller dictionaries, and it is under this setting that our method really stands out.", "The 25 entry and numerals columns in Table 1 show the results for this setting, where all previous methods drop dramatically, falling below 1% accuracy in all cases.", "The method by also obtains poor results with small dictionaries, which reinforces our hypothesis in Section 2.2 that their method can only capture coarse-grain bilingual relations for small dictionaries.", "In contrast, our proposed method obtains very competitive results for all dictionaries, with a difference of only 1-2 points between the full dictionary and both the 25 entry dictionary and the numerals dictionary in all three languages.", "Figure 2 shows the curve of the English-Italian accuracy for different seed dictionary sizes, confirming this trend.", "Finally, it is worth mentioning that, even if all the three language pairs show the same general behavior, there are clear differences in their absolute accuracy numbers, which can be attributed to the linguistic proximity of the languages involved.", "In particular, the results for English-Finnish are about 10 points below the rest, which is explained by the fact that Finnish is a non-indoeuropean agglutinative language, making the task considerably more difficult for this language pair.", "In this regard, we believe that the good results with small dictionaries are a strong indication of the robustness of our method, showing that it is able to learn good bilingual mappings from very little bilingual ev-idence even for distant language pairs where the structural similarity of the embedding spaces is presumably weaker.", "Crosslingual word similarity In addition to the baseline systems in Section 4.2, in the crosslingual similarity experiments we also tested the method by Luong et al.", "(2015) , which is the state-of-the-art for bilingual word embeddings based on parallel corpora (Upadhyay et al., 2016) 6 .", "As this method is an extension of word2vec, we used the same hyperparameters as for the monolingual embeddings when possible (see Section 4.1), and leave the default ones otherwise.", "We used Europarl as our parallel corpus to train this method as done by the authors, which consists of nearly 2 million parallel sentences.", "As shown in the results in Table 2 , our method obtains the best results in all cases, surpassing the rest of the dictionary-based methods by 1-3 points depending on the dataset.", "But, most importantly, it does not suffer from any significant degradation for using smaller dictionaries and, in fact, our method gets better results using the 25 entry dictionary or the numeral list as the only bilingual evidence than any of the baseline systems using much richer resources.", "The relatively poor results of Luong et al.", "(2015) can be attributed to the fact that the dictionary based methods make use of much bigger monolingual corpora, while methods based on parallel corpora are restricted to smaller corpora.", "However, it is not clear how to introduce monolingual corpora on those methods.", "We did run some experiments with BilBOWA (Gouws et al., 2015) , which supports training in monolingual corpora in addition to bilingual corpora, but obtained very poor results 7 .", "All in all, our experiments show 
Figure 2 : Accuracy on English-Italian bilingual lexicon induction for different seed dictionaries that it is better to use large monolingual corpora in combination with very little bilingual data rather than a bilingual corpus of a standard size alone.", "Global optimization objective It might seem somehow surprising at first that, as seen in the previous section, our simple selflearning approach is able to learn high quality bilingual embeddings from small seed dictionaries instead of falling in degenerated solutions.", "In this section, we try to shed light on our approach, and give empirical evidence supporting our claim.", "More concretely, we argue that, for the embedding mapping and dictionary induction methods described in Section 3, the proposed selflearning framework is implicitly solving the following global optimization problem 8 : W * = arg max W i max j (X i * W ) Β· Z j * s.t.", "W W T = W T W = I Contrary to the optimization objective for W in Section 3.1, the global optimization objective does not refer to any dictionary, and maximizes the similarity between each source language word and its closest target language word.", "Intuitively, a random solution would map source language embeddings to seemingly random locations in the target language space, and it would thus be unlikely that BilBOWA.", "8 While we restrict our formal analysis to the embedding mapping and dictionary induction method that we use, the general reasoning should be valid for other choices as well.", ".628 .739 .604 Table 2 : Spearman correlations on English-Italian and English-German crosslingual word similarity they have any target language word nearby, making the optimization value small.", "In contrast, a good solution would map source language words close to their translation equivalents in the target language space, and they would thus have their corresponding embeddings nearby, making the optimization value large.", "While it is certainly possible to build degenerated solutions that take high optimization values for small subsets of the vocabulary, we think that the structural similarity between independently trained embedding spaces in different languages is strong enough that optimizing this function yields to meaningful bilingual mappings when the size of the vocabulary is much larger than the dimensionality of the embeddings.", "The reasoning for how the self-learning framework is optimizing this objective is as follows.", "At the end of each iteration, the dictionary D is updated to assign, for the current mapping W , each source language word to its closest target language word.", "This way, when we update W to maximize the average similarity of these dictionary entries at the beginning of the next iteration, it is guaranteed that the value of the optimization objective will improve (or at least remain the same).", "The reason is that the average similarity between each word and what were previously the closest words will be improved if possible, as this is what the updated W directly optimizes (see Section 3.1).", "In addition to that, it is also possible that, for some source words, some other target words get closer after the update.", "Thanks to this, our self-learning algorithm is guaranteed to converge to a local optimum of the above global objective, behaving like an alternating optimization algorithm for it.", "It is interesting to note that the above reasoning is valid no matter what the the initial solution is, and, in fact, the global optimization objective does not depend on the 
seed dictionary nor any other bilingual resource.", "For that reason, it should be possible to use a random initialization instead of a small seed dictionary.", "However, we empirically observe that this works poorly in practice, as our algorithm tends to get stuck in poor local optima when the initial solution is not good enough.", "The general behavior of our method is reflected in Figure 3 , which shows the learning curve for different seed dictionaries according to both the objective function and the accuracy on bilingual lexicon induction.", "As it can be seen, the objective function is improved from iteration to iteration and converges to a local optimum just as expected.", "At the same time, the learning curves show a strong correlation between the optimization objective and the accuracy, as it can be clearly observed that improving the former leads to an improvement of the latter, confirming our explanations.", "Regarding random initialization, the figure shows that the algorithm gets stuck in a poor local optimum of the objective function, which is the reason of the bad performance (0% accuracy) on bilingual lexicon induction, but the proposed optimization objective itself seems to be adequate.", "Finally, we empirically observe that our algorithm learns similar mappings no matter what the seed dictionary was.", "We first repeated our experiments on English-Italian bilingual lexicon induction for 5 different dictionaries of 25 entries, obtaining an average accuracy of 38.15% and a standard deviation of only 0.75%.", "In addition to that, we observe that the overlap between the predictions made when starting with the full dictionary and the numerals dictionary is 76.00% (60.00% for the 25 entry dictionary).", "At the same time, 37.00% of the test cases are correctly solved by both instances, and it is only 5.07% of the test cases that one of them gets right and the other wrong (34.00% and 8.94% for the 25 entry dictionary).", "This suggests that our algorithm tends to converge to similar solutions even for disjoint seed dictionaries, which is in line with our view that we are implicitly optimizing an objective that is independent from the seed dictionary, yet a seed dictionary is necessary to build a good enough initial solution to avoid getting stuck in poor local optima.", "For that reason, it is likely that better methods to tackle this optimization problem would allow learning bilingual word embeddings without any bilingual evidence at all and, in this regard, we believe that our work opens exciting opportunities for future research.", "Error analysis So as to better understand the behavior of our system, we performed an error analysis of its output in English-Italian bilingual lexicon induction when starting with the 5,000 entry, the 25 entry and the numeral dictionaries in comparison with the baseline method of Artetxe et al.", "(2016) Our analysis first reveals that, in all the cases, about a third of the translations taken as erroneous according to the gold standard are not so in real-ity.", "This corresponds to both different morphological variants of the gold standard translations (e.g.", "dichiarato/dichiarΓ²) and other valid translations that were missing in the gold standard (e.g.", "climb β†’ salita instead of the gold standard scalato).", "This phenomenon is considerably more pronounced in the first frequency bins, which already have a much higher accuracy according to the gold standard.", "As for the actual errors, we observe that nearly a third of them correspond 
to named entities for all the different variants.", "Interestingly, the vast majority of the proposed translations in these cases are also named entities (e.g.", "Ryan β†’ Jason, John β†’ Paolo), which are often highly related to the original ones (e.g.", "Volvo β†’ BMW, Olympus β†’ Nikon).", "While these are clear errors, it is understandable that these methods are unable to discriminate between named entities to this degree based solely on the distributional hypothesis, in particular when it comes to common proper names (e.g.", "John, Andy), and one could design alternative strategies to address this issue like taking the edit distance as an additional signal.", "For the remaining errors, all systems tend to propose translations that have some degree of relationship with the correct ones, including nearsynonyms (e.g.", "guidelines β†’ raccomandazioni), antonyms (e.g.", "sender β†’ destinatario) and words in the same semantic field (e.g.", "nominalism β†’ intuizionismo / innatismo, which are all philosophical doctrines).", "However, there are also a few instances where the relationship is weak or unclear (e.g.", "loch β†’ giardini, sweep β†’ serrare).", "We also observe a few errors that are related to multiwords or collocations (e.g.", "carrier β†’ aereo, presumably related to the multiword air carrier / linea aerea), as well as some rare word that is repeated across many translations (Ferruzzi), which could be attributed to the hubness problem .", "All in all, our error analysis reveals that the baseline method of Artetxe et al.", "(2016) and the proposed algorithm tend to make the same kind of errors regardless of the seed dictionary used by the latter, which reinforces our interpretation in the previous section regarding an underlying optimization objective that is independent from any training dictionary.", "Moreover, it shows that the quality of the learned mappings is much better than what the raw accuracy numbers might sug-gest, encouraging the incorporation of these techniques in other applications.", "Conclusions and future work In this work, we propose a simple self-learning framework to learn bilingual word embedding mappings in combination with any embedding mapping and dictionary induction technique.", "Our experiments on bilingual lexicon induction and crosslingual word similarity show that our method is able to learn high quality bilingual embeddings from as little bilingual evidence as a 25 word dictionary or an automatically generated list of numerals, obtaining results that are competitive with state-of-the-art systems using much richer bilingual resources like larger dictionaries or parallel corpora.", "In spite of its simplicity, a more detailed analysis shows that our method is implicitly optimizing a meaningful objective function that is independent from any bilingual data which, with a better optimization method, might allow to learn bilingual word embeddings in a completely unsupervised manner.", "In the future, we would like to delve deeper into this direction and fine-tune our method so it can reliably learn high quality bilingual word embeddings without any bilingual evidence at all.", "In addition to that, we would like to explore non-linear transformations (Lu et al., 2015) and alternative dictionary induction methods Smith et al., 2017) .", "Finally, we would like to apply our model in the decipherment scenario (Dou et al., 2015) ." ] }
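Combining the mapping step (Section 3.1), the dictionary induction step (Section 3.2) and the convergence criterion (Section 4.1) described in the paper content above, the self-learning loop can be sketched as follows. This is a hedged, illustrative reimplementation in plain numpy, not the authors' released vecmap code: it assumes dense embedding matrices of the same dimensionality that are already length-normalized and mean-centered, and it represents the dictionary as two aligned index arrays.

```python
import numpy as np

def learn_mapping(X, Z, src_idx, trg_idx):
    # Closed-form solution of the constrained problem in Section 3.1:
    # W = U V^T, where U S V^T is the SVD of X_D^T Z_D for the current dictionary D.
    u, _, vt = np.linalg.svd(X[src_idx].T @ Z[trg_idx])
    return u @ vt

def induce_dictionary(XW, Z, block_size=1000):
    # Nearest-neighbor retrieval (Section 3.2): for every mapped source word,
    # the target word with the highest dot product, computed block by block so
    # the full similarity matrix never has to fit in memory.
    n = XW.shape[0]
    trg_idx = np.empty(n, dtype=int)
    sims = np.empty(n)
    for start in range(0, n, block_size):
        block = XW[start:start + block_size] @ Z.T
        trg_idx[start:start + block_size] = block.argmax(axis=1)
        sims[start:start + block_size] = block.max(axis=1)
    return trg_idx, sims

def self_learning(X, Z, seed_src, seed_trg, threshold=1e-6):
    # X, Z: length-normalized, mean-centered embedding matrices.
    # seed_src, seed_trg: aligned index arrays for the (possibly tiny) seed dictionary.
    src_idx, trg_idx = np.asarray(seed_src), np.asarray(seed_trg)
    prev_obj = -np.inf
    while True:
        W = learn_mapping(X, Z, src_idx, trg_idx)
        trg_idx, sims = induce_dictionary(X @ W, Z)
        src_idx = np.arange(X.shape[0])   # from the second iteration on, one entry per source word
        obj = sims.mean()                 # average similarity of the induced dictionary
        if obj - prev_obj < threshold:    # convergence criterion (1e-6 in the paper)
            return W, trg_idx
        prev_obj = obj
```

Starting from a 25 entry seed (or the numerals dictionary), each pass solves the Procrustes problem in closed form and then re-induces a full dictionary by nearest-neighbor retrieval, which is exactly the alternating scheme whose objective is analyzed in Section 5.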
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related work", "Bilingual embedding mappings", "Unsupervised and weakly supervised bilingual embeddings", "Embedding mapping", "Dictionary induction", "Experiments and results", "Experimental settings", "Bilingual lexicon induction", "Crosslingual word similarity", "Global optimization objective", "Error analysis", "Conclusions and future work" ] }
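For the bilingual lexicon induction evaluation reported above, a learned mapping is scored by translating each test source word with nearest-neighbor retrieval and comparing the prediction against a gold test dictionary. A small sketch of that protocol follows; the representation of the test dictionary (a mapping from a source word index to the set of acceptable target word indices) is an assumption made for illustration, not a format defined by the paper.

```python
import numpy as np

def translation_accuracy(X, Z, W, test_pairs):
    """Proportion of test source words whose nearest neighbor in the mapped
    space is one of their gold translations.

    test_pairs: dict mapping a source word index to a set of gold target
    word indices (a gold dictionary may list several valid translations).
    """
    XW = X @ W
    correct = 0
    for src, gold in test_pairs.items():
        prediction = int(np.argmax(XW[src] @ Z.T))  # nearest-neighbor retrieval
        correct += prediction in gold
    return correct / len(test_pairs)
```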
GEM-SciDuet-train-102#paper-1266#slide-1
Bilingual embedding mappings
Basque English Seed dictionary Basque arg min English formalization and implementation details in the paper based on the mapping method of Artetxe et al. (2016) Too good to be true?
Basque English Seed dictionary Basque arg min English formalization and implementation details in the paper based on the mapping method of Artetxe et al. (2016) Too good to be true?
[]
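The mapping objective summarized in this slide relies on the preprocessing of Section 3.1 in the paper content above: embeddings are length-normalized and then mean-centered before the orthogonal mapping is learned. A possible sketch of that step, intended as an illustration rather than the exact released implementation:

```python
import numpy as np

def preprocess(E):
    """Length-normalize each embedding (row), then mean-center each dimension;
    applied independently to the source and target embedding matrices."""
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    return E - E.mean(axis=0, keepdims=True)
```

Under the orthogonality constraint, minimizing the squared Euclidean distances in the slide's arg min is then equivalent to maximizing dot products, which is the form used for retrieval.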
GEM-SciDuet-train-102#paper-1266#slide-2
1266
Learning bilingual word embeddings with (almost) no bilingual data
Most methods to learn bilingual word embeddings rely on large parallel corpora, which is difficult to obtain for most language pairs. This has motivated an active research line to relax this requirement, with methods that use document-aligned corpora or bilingual dictionaries of a few thousand words instead. In this work, we further reduce the need of bilingual resources using a very simple self-learning approach that can be combined with any dictionary-based mapping technique. Our method exploits the structural similarity of embedding spaces, and works with as little bilingual evidence as a 25 word dictionary or even an automatically generated list of numerals, obtaining results comparable to those of systems that use richer resources.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197 ], "paper_content_text": [ "Introduction Multilingual word embeddings have attracted a lot of attention in recent times.", "In addition to having a direct application in inherently crosslingual tasks like machine translation (Zou et al., 2013) and crosslingual entity linking (Tsai and Roth, 2016) , they provide an excellent mechanism for transfer learning, where a model trained in a resource-rich language is transferred to a less-resourced one, as shown with part-of-speech tagging , parsing (Xiao and Guo, 2014) and document classification (Klementiev et al., 2012) .", "Most methods to learn these multilingual word embeddings make use of large parallel corpora (Gouws et al., 2015; Luong et al., 2015) , but there have been several proposals to relax this requirement, given its scarcity in most language pairs.", "A possible relaxation is to use document-aligned or label-aligned comparable corpora (SΓΈgaard et al., 2015; VuliΔ‡ and Moens, 2016; Mogadala and Rettinger, 2016) , but large amounts of such corpora are not always available for some language pairs.", "An alternative approach that we follow here is to independently train the embeddings for each language on monolingual corpora, and then learn a linear transformation to map the embeddings from one space into the other by minimizing the distances in a bilingual dictionary, usually in the range of a few thousand entries (Mikolov et al., 2013a; Artetxe et al., 2016) .", "However, dictionaries of that size are not readily available for many language pairs, specially those involving less-resourced languages.", "In this work, we reduce the need of large bilingual dictionaries to much smaller seed dictionaries.", "Our method can work with as little as 25 word pairs, which are straightforward to obtain assuming some basic knowledge of the languages involved.", "The method can also work with trivially generated seed dictionaries of numerals (i.e.", "1-1, 2-2, 3-3, 4-4...) 
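The abstract above mentions an automatically generated list of numerals as the weakest form of supervision. A possible construction of that seed dictionary, assuming word-to-index vocabularies for both languages; the function and variable names are illustrative, not taken from the paper's code.

```python
import re

def numeral_seed_dictionary(src_vocab, trg_vocab):
    """Seed pairs (src_index, trg_index) for every token made only of digits
    (the [0-9]+ pattern from the paper) that occurs in both vocabularies,
    e.g. '1'-'1', '2'-'2', '1992'-'1992'."""
    pattern = re.compile(r'^[0-9]+$')
    shared = [w for w in src_vocab if pattern.match(w) and w in trg_vocab]
    return [(src_vocab[w], trg_vocab[w]) for w in shared]
```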
making it possible to learn bilingual word embeddings without any real bilingual data.", "In either case, we obtain very competitive results, comparable to other state-of-the-art methods that make use of much richer bilingual resources.", "The proposed method is an extension of existing mapping techniques, where the dictionary is used to learn the embedding mapping and the embedding mapping is used to induce a new dictionary iteratively in a self-learning fashion (see Figure 1) .", "In spite of its simplicity, our analysis of the implicit optimization objective reveals that the method is exploiting the structural similarity of independently trained embeddings.", "We analyze previous work in Section 2.", "Section 3 describes the self-learning framework, while Section 4 presents the experiments.", "Section 5 analyzes the underlying optimization objective, and Section 6 presents an error analysis.", "Figure 1 : A general schema of the proposed self-learning framework.", "Previous works learn a mapping W based on the seed dictionary D, which is then used to learn the full dictionary.", "In our proposal we use the new dictionary to learn a new mapping, iterating until convergence.", "Related work We will first focus on bilingual embedding mappings, which are the basis of our proposals, and then on other unsupervised and weakly supervised methods to learn bilingual word embeddings.", "Bilingual embedding mappings Methods to induce bilingual mappings work by independently learning the embeddings in each language using monolingual corpora, and then learning a transformation from one embedding space into the other based on a bilingual dictionary.", "The first of such methods is due to Mikolov et al.", "(2013a) , who learn the linear transformation that minimizes the sum of squared Euclidean distances for the dictionary entries.", "The same optimization objective is used by , who constrain the transformation matrix to be orthogonal.", "Xing et al.", "(2015) incorporate length normalization in the training of word embeddings and maximize the cosine similarity instead, enforcing the orthogonality constraint to preserve the length normalization after the mapping.", "Finally, use max-margin optimization with intruder negative sampling.", "Instead of learning a single linear transformation from the source language into the target language, Faruqui and Dyer (2014) use canonical correlation analysis to map both languages to a shared vector space.", "Lu et al.", "(2015) extend this work and apply deep canonical correlation analysis to learn non-linear transformations.", "Artetxe et al.", "(2016) propose a general framework that clarifies the relation between Mikolov et al.", "(2013a) , Xing et al.", "(2015) , Faruqui and Dyer (2014) and as variants of the same core optimization objective, and show that a new variant is able to surpass them all.", "While most of the previous methods use gradient descent, Artetxe et al.", "(2016) propose an efficient analytical implementation for those same methods, recently extended by Smith et al.", "(2017) to incorporate dimensionality reduction.", "A prominent application of bilingual embedding mappings, with a direct application in machine translation (Zhao et al., 2015) , is bilingual lexicon extraction, which is also the main evaluation method.", "More specifically, the learned mapping is used to induce the translation of source language words that were missing in the original dictionary, usually by taking their nearest neighbor word in the target language according to 
cosine similarity, although and Smith et al.", "(2017) propose alternative retrieval methods to address the hubness problem.", "Unsupervised and weakly supervised bilingual embeddings As mentioned before, our method works with as little as 25 word pairs, while the methods discussed previously use thousands of pairs.", "The only exception in this regard is the work by , who only use 10 word pairs with good results on transfer learning for part-of-speech tagging.", "Our experiments will show that, although their method captures coarse-grained relations, it fails on finer-grained tasks like bilingual lexicon induction.", "Bootstrapping methods similar to ours have been previously proposed for traditional countbased vector space models (Peirsman and PadΓ³, 2010; VuliΔ‡ and Moens, 2013) .", "However, while previous techniques incrementally build a high- dimensional model where each axis encodes the co-occurrences with a specific word and its equivalent in the other language, our method works with low-dimensional pre-trained word embeddings, which are more widely used nowadays.", "A practical aspect for reducing the need of bilingual supervision is on the design of the seed dictionary.", "This is analyzed in depth by VuliΔ‡ and Korhonen (2016) , who propose using documentaligned corpora to extract the training dictionary.", "A more common approach is to rely on shared words and cognates (Peirsman and PadΓ³, 2010; Smith et al., 2017) , eliminating the need of bilingual data in practice.", "Our use of shared numerals exploits the same underlying idea, but relies on even less bilingual evidence and should thus generalize better to distant language pairs.", "Miceli Barone (2016) and Cao et al.", "(2016) go one step further and attempt to learn bilingual embeddings without any bilingual evidence.", "The former uses adversarial autoencoders (Makhzani et al., 2016) , combining an encoder that maps the source language embeddings into the target language, a decoder that reconstructs the original embeddings, and a discriminator that distinguishes mapped embeddings from real target language embeddings, whereas the latter adds a regularization term to the training of word embeddings that pushes the mean and variance of each dimension in different languages close to each other.", "Although promising, the reported performance in both cases is poor in comparison to other methods.", "Finally, the induction of bilingual knowledge from monolingual corpora is closely related to the decipherment scenario, for which models that incorporate word embeddings have also been proposed (Dou et al., 2015) .", "However, decipherment is only concerned with translating text from one language to another and relies on complex statistical models that are designed specifically for that purpose, while our approach is more general and learns task-independent multilingual embeddings.", "Algorithm 2 Proposed self-learning framework Input: X (source embeddings) Input: Z (target embeddings) Input: D (seed dictionary) 1: repeat 2: W ← LEARN MAPPING(X, Z, D) 3: D ← LEARN DICTIONARY(X, Z, W ) 4: until convergence criterion 5: EVALUATE DICTIONARY(D) 3 Proposed self-learning framework As discussed in Section 2.1, a common evaluation task (and practical application) of bilingual embedding mappings is to induce bilingual lexicons, that is, to obtain the translation of source words that were missing in the training dictionary, which are then compared to a gold standard test dictionary for evaluation.", "This way, one can say that the seed 
(train) dictionary is used to learn a mapping, which is then used to induce a better dictionary (at least in the sense that it is larger).", "Algorithm 1 summarizes this framework.", "Following this observation, we propose to use the output dictionary in Algorithm 1 as the input of the same system in a self-learning fashion which, assuming that the output dictionary was indeed better than the original one, should serve to learn a better mapping and, consequently, an even better dictionary the second time.", "The process can then be repeated iteratively to obtain a hopefully better mapping and dictionary each time until some convergence criterion is met.", "Algorithm 2 summarizes this alternative framework that we propose.", "Our method can be combined with any embedding mapping and dictionary induction technique (see Section 2.1).", "However, efficiency turns out to be critical for a variety of reasons.", "First of all, by enclosing the learning logic in a loop, the total training time is increased by the number of iterations.", "Even more importantly, our framework requires to explicitly build the entire dictionary at each iteration, whereas previous work tends to induce the translation of individual words ondemand later at runtime.", "Moreover, from the second iteration onwards, it is this induced, full dictionary that has to be used to learn the embedding mapping, and not the considerably smaller seed dictionary as it is typically done.", "In the following two subsections, we respectively describe the embedding mapping method and the dictionary in-duction method that we adopt in our work with these efficiency requirements in mind.", "Embedding mapping As discussed in Section 2.1, most previous methods to learn embedding mappings use variants of gradient descent.", "Among the more efficient exact alternatives, we decide to adopt the one by Artetxe et al.", "(2016) for its simplicity and good results as reported in their paper.", "We next present their method, adapting the formalization to explicitly incorporate the dictionary as required by our self-learning algorithm.", "Let X and Z denote the word embedding matrices in two languages so that X i * corresponds to the ith source language word embedding and Z j * corresponds to the jth target language embedding.", "While Artetxe et al.", "(2016) assume these two matrices are aligned according to the dictionary, we drop this assumption and represent the dictionary explicitly as a binary matrix D, so that D ij = 1 if the ith source language word is aligned with the jth target language word.", "The goal is then to find the optimal mapping matrix W * so that the sum of squared Euclidean distances between the mapped source embeddings X i * W and target embeddings Z j * for the dictionary entries D ij is minimized: W * = arg min W i j D ij ||X i * W βˆ’ Z j * || 2 Following Artetxe et al.", "(2016) , we length normalize and mean center the embedding matrices X and Z in a preprocessing step, and constrain W to be an orthogonal matrix (i.e.", "W W T = W T W = I), which serves to enforce monolingual invariance, preventing a degradation in monolingual performance while yielding to better bilingual mappings.", "Under such orthogonality constraint, minimizing the squared Euclidean distance becomes equivalent to maximizing the dot product, so the above optimization objective can be reformulated as follows: W * = arg max W Tr XW Z T D T where Tr (Β·) denotes the trace operator (the sum of all the elements in the main diagonal).", "The optimal orthogonal 
solution for this problem is given by W * = U V T , where X T DZ = U Ξ£V T is the singular value decomposition of X T DZ.", "Since the dictionary matrix D is sparse, this can be efficiently computed in linear time with respect to the number of dictionary entries.", "Dictionary induction As discussed in Section 2.1, practically all previous work uses nearest neighbor retrieval for word translation induction based on embedding mappings.", "In nearest neighbor retrieval, each source language word is assigned the closest word in the target language.", "In our work, we use the dot product between the mapped source language embeddings and the target language embeddings as the similarity measure, which is roughly equivalent to cosine similarity given that we apply length normalization followed by mean centering as a preprocessing step (see Section 3.1).", "This way, following the notation in Section 3.1, we set D ij = 1 if j = argmax k (X i * W ) Β· Z k * and D ij = 0 other- wise 1 .", "While we find that independently computing the similarity measure between all word pairs is prohibitively slow, the computation of the entire similarity matrix XW Z T can be easily vectorized using popular linear algebra libraries, obtaining big performance gains.", "However, the resulting similarity matrix is often too large to fit in memory when using large vocabularies.", "For that reason, instead of computing the entire similarity matrix XW Z T in a single step, we iteratively compute submatrices of it using vectorized matrix multiplication, find their corresponding maxima each time, and then combine the results.", "Experiments and results In this section, we experimentally test the proposed method in bilingual lexicon induction and crosslingual word similarity.", "Subsection 4.1 describes the experimental settings, while Subsections 4.2 and 4.3 present the results obtained in each of the tasks.", "The code and resources necessary to reproduce our experiments are available at https://github.com/artetxem/ vecmap.", "Experimental settings For easier comparison with related work, we evaluated our mappings on bilingual lexicon induction using the public English-Italian dataset by , which includes monolingual word embeddings in both languages together with a bilingual dictionary split in a training set and a test set 2 .", "The embeddings were trained with the word2vec toolkit with CBOW and negative sampling (Mikolov et al., 2013b ) 3 , using a 2.8 billion word corpus for English (ukWaC + Wikipedia + BNC) and a 1.6 billion word corpus for Italian (itWaC) .", "The training and test sets were derived from a dictionary built form Europarl word alignments and available at OPUS (Tiedemann, 2012) , taking 1,500 random entries uniformly distributed in 5 frequency bins as the test set and the 5,000 most frequent of the remaining word pairs as the training set.", "In addition to English-Italian, we selected two other languages from different language families with publicly available resources.", "We thus created analogous datasets for English-German and English-Finnish.", "In the case of German, the embeddings were trained on the 0.9 billion word corpus SdeWaC, which is part of the WaCky collection (Baroni et al., 2009 ) that was also used for English and Italian.", "Given that Finnish is not included in this collection, we used the 2.8 billion word Common Crawl corpus provided at WMT 2016 4 instead, which we tokenized using the Stanford Tokenizer (Manning et al., 2014) .", "In addition to that, we created training and test 
sets for both pairs from their respective Europarl dictionaries from OPUS following the exact same procedure used for English-Italian, and the word embeddings were also trained using the same configuration as .", "Given that the main focus of our work is on small seed dictionaries, we created random subsets of 2,500, 1,000, 500, 250, 100, 75, 50 and 25 entries from the original training dictionaries of 5,000 entries.", "This was done by shuffling once the training dictionaries and taking their first k entries, so it is guaranteed that each dictionary is a strict subset of the bigger dictionaries.", "In addition to that, we explored using automatically generated dictionaries as a shortcut to practical unsupervised learning.", "For that purpose, we created numeral dictionaries, consisting of words matching the [0-9]+ regular expression in both vocabularies (e.g.", "1-1, 2-2, 3-3, 1992-1992 etc.).", "The resulting dictionary had 2772 entries for English-Italian, 2148 for English-German, and 2345 for English-Finnish.", "While more sophisticated approaches are possible (e.g.", "involving the edit distance of all words), we believe that this method is general enough that should work with practically any language pair, as Arabic numerals are often used even in languages with a different writing system (e.g.", "Chinese and Russian).", "While bilingual lexicon induction is a standard evaluation task for seed dictionary based methods like ours, it is unsuitable for bilingual corpus based methods, as statistical word alignment already provides a reliable way to derive dictionaries from bilingual corpora and, in fact, this is how the test dictionary itself is built in our case.", "For that reason, we carried out some experiments in crosslingual word similarity as a way to test our method in a different task and allowing to compare it to systems that use richer bilingual data.", "There are no many crosslingual word similarity datasets, and we used the RG-65 and WordSim-353 crosslingual datasets for English-German and the WordSim-353 crosslingual dataset for English-Italian as published by Camacho-Collados et al.", "(2015) 5 .", "As for the convergence criterion, we decide to stop training when the improvement on the average dot product for the induced dictionary falls below a given threshold from one iteration to the next.", "After length normalization, the dot product ranges from -1 to 1, so we decide to set this threshold at 1e-6, which we find to be a very conservative value yet enough that training takes a reasonable amount of time.", "The curves in the next section confirm that this was a reasonable choice.", "This convergence criterion is usually met in less than 100 iterations, each of them taking 5 minutes on a modest desktop computer (Intel Core i5-4670 CPU with 8GiB of RAM), including the induction of a dictionary of 200,000 words at each iteration.", "Bilingual lexicon induction For the experiments on bilingual lexicon induction, we compared our method with those proposed by Mikolov et al.", "(2013a) , Xing et al.", "(2015) , and Artetxe et al.", "(2016) , all of them implemented as part of the framework proposed by the latter.", "The results ob- Table 1 : Accuracy (%) on bilingual lexicon induction for different seed dictionaries tained with the 5,000 entry, 25 entry and the numerals dictionaries for all the 3 language pairs are given in Table 1 .", "The results for the 5,000 entry dictionaries show that our method is comparable or even better than the other systems.", "As another 
reference, the best published results using nearest-neighbor retrieval are due to , who report an accuracy of 40.20% for the full English-Italian dictionary, almost at pair with our system (39.67%).", "In any case, the main focus of our work is on smaller dictionaries, and it is under this setting that our method really stands out.", "The 25 entry and numerals columns in Table 1 show the results for this setting, where all previous methods drop dramatically, falling below 1% accuracy in all cases.", "The method by also obtains poor results with small dictionaries, which reinforces our hypothesis in Section 2.2 that their method can only capture coarse-grain bilingual relations for small dictionaries.", "In contrast, our proposed method obtains very competitive results for all dictionaries, with a difference of only 1-2 points between the full dictionary and both the 25 entry dictionary and the numerals dictionary in all three languages.", "Figure 2 shows the curve of the English-Italian accuracy for different seed dictionary sizes, confirming this trend.", "Finally, it is worth mentioning that, even if all the three language pairs show the same general behavior, there are clear differences in their absolute accuracy numbers, which can be attributed to the linguistic proximity of the languages involved.", "In particular, the results for English-Finnish are about 10 points below the rest, which is explained by the fact that Finnish is a non-indoeuropean agglutinative language, making the task considerably more difficult for this language pair.", "In this regard, we believe that the good results with small dictionaries are a strong indication of the robustness of our method, showing that it is able to learn good bilingual mappings from very little bilingual ev-idence even for distant language pairs where the structural similarity of the embedding spaces is presumably weaker.", "Crosslingual word similarity In addition to the baseline systems in Section 4.2, in the crosslingual similarity experiments we also tested the method by Luong et al.", "(2015) , which is the state-of-the-art for bilingual word embeddings based on parallel corpora (Upadhyay et al., 2016) 6 .", "As this method is an extension of word2vec, we used the same hyperparameters as for the monolingual embeddings when possible (see Section 4.1), and leave the default ones otherwise.", "We used Europarl as our parallel corpus to train this method as done by the authors, which consists of nearly 2 million parallel sentences.", "As shown in the results in Table 2 , our method obtains the best results in all cases, surpassing the rest of the dictionary-based methods by 1-3 points depending on the dataset.", "But, most importantly, it does not suffer from any significant degradation for using smaller dictionaries and, in fact, our method gets better results using the 25 entry dictionary or the numeral list as the only bilingual evidence than any of the baseline systems using much richer resources.", "The relatively poor results of Luong et al.", "(2015) can be attributed to the fact that the dictionary based methods make use of much bigger monolingual corpora, while methods based on parallel corpora are restricted to smaller corpora.", "However, it is not clear how to introduce monolingual corpora on those methods.", "We did run some experiments with BilBOWA (Gouws et al., 2015) , which supports training in monolingual corpora in addition to bilingual corpora, but obtained very poor results 7 .", "All in all, our experiments show 
Figure 2 : Accuracy on English-Italian bilingual lexicon induction for different seed dictionaries that it is better to use large monolingual corpora in combination with very little bilingual data rather than a bilingual corpus of a standard size alone.", "Global optimization objective It might seem somehow surprising at first that, as seen in the previous section, our simple selflearning approach is able to learn high quality bilingual embeddings from small seed dictionaries instead of falling in degenerated solutions.", "In this section, we try to shed light on our approach, and give empirical evidence supporting our claim.", "More concretely, we argue that, for the embedding mapping and dictionary induction methods described in Section 3, the proposed selflearning framework is implicitly solving the following global optimization problem 8 : W * = arg max W i max j (X i * W ) Β· Z j * s.t.", "W W T = W T W = I Contrary to the optimization objective for W in Section 3.1, the global optimization objective does not refer to any dictionary, and maximizes the similarity between each source language word and its closest target language word.", "Intuitively, a random solution would map source language embeddings to seemingly random locations in the target language space, and it would thus be unlikely that BilBOWA.", "8 While we restrict our formal analysis to the embedding mapping and dictionary induction method that we use, the general reasoning should be valid for other choices as well.", ".628 .739 .604 Table 2 : Spearman correlations on English-Italian and English-German crosslingual word similarity they have any target language word nearby, making the optimization value small.", "In contrast, a good solution would map source language words close to their translation equivalents in the target language space, and they would thus have their corresponding embeddings nearby, making the optimization value large.", "While it is certainly possible to build degenerated solutions that take high optimization values for small subsets of the vocabulary, we think that the structural similarity between independently trained embedding spaces in different languages is strong enough that optimizing this function yields to meaningful bilingual mappings when the size of the vocabulary is much larger than the dimensionality of the embeddings.", "The reasoning for how the self-learning framework is optimizing this objective is as follows.", "At the end of each iteration, the dictionary D is updated to assign, for the current mapping W , each source language word to its closest target language word.", "This way, when we update W to maximize the average similarity of these dictionary entries at the beginning of the next iteration, it is guaranteed that the value of the optimization objective will improve (or at least remain the same).", "The reason is that the average similarity between each word and what were previously the closest words will be improved if possible, as this is what the updated W directly optimizes (see Section 3.1).", "In addition to that, it is also possible that, for some source words, some other target words get closer after the update.", "Thanks to this, our self-learning algorithm is guaranteed to converge to a local optimum of the above global objective, behaving like an alternating optimization algorithm for it.", "It is interesting to note that the above reasoning is valid no matter what the the initial solution is, and, in fact, the global optimization objective does not depend on the 
seed dictionary nor any other bilingual resource.", "For that reason, it should be possible to use a random initialization instead of a small seed dictionary.", "However, we empirically observe that this works poorly in practice, as our algorithm tends to get stuck in poor local optima when the initial solution is not good enough.", "The general behavior of our method is reflected in Figure 3 , which shows the learning curve for different seed dictionaries according to both the objective function and the accuracy on bilingual lexicon induction.", "As it can be seen, the objective function is improved from iteration to iteration and converges to a local optimum just as expected.", "At the same time, the learning curves show a strong correlation between the optimization objective and the accuracy, as it can be clearly observed that improving the former leads to an improvement of the latter, confirming our explanations.", "Regarding random initialization, the figure shows that the algorithm gets stuck in a poor local optimum of the objective function, which is the reason of the bad performance (0% accuracy) on bilingual lexicon induction, but the proposed optimization objective itself seems to be adequate.", "Finally, we empirically observe that our algorithm learns similar mappings no matter what the seed dictionary was.", "We first repeated our experiments on English-Italian bilingual lexicon induction for 5 different dictionaries of 25 entries, obtaining an average accuracy of 38.15% and a standard deviation of only 0.75%.", "In addition to that, we observe that the overlap between the predictions made when starting with the full dictionary and the numerals dictionary is 76.00% (60.00% for the 25 entry dictionary).", "At the same time, 37.00% of the test cases are correctly solved by both instances, and it is only 5.07% of the test cases that one of them gets right and the other wrong (34.00% and 8.94% for the 25 entry dictionary).", "This suggests that our algorithm tends to converge to similar solutions even for disjoint seed dictionaries, which is in line with our view that we are implicitly optimizing an objective that is independent from the seed dictionary, yet a seed dictionary is necessary to build a good enough initial solution to avoid getting stuck in poor local optima.", "For that reason, it is likely that better methods to tackle this optimization problem would allow learning bilingual word embeddings without any bilingual evidence at all and, in this regard, we believe that our work opens exciting opportunities for future research.", "Error analysis So as to better understand the behavior of our system, we performed an error analysis of its output in English-Italian bilingual lexicon induction when starting with the 5,000 entry, the 25 entry and the numeral dictionaries in comparison with the baseline method of Artetxe et al.", "(2016) Our analysis first reveals that, in all the cases, about a third of the translations taken as erroneous according to the gold standard are not so in real-ity.", "This corresponds to both different morphological variants of the gold standard translations (e.g.", "dichiarato/dichiarΓ²) and other valid translations that were missing in the gold standard (e.g.", "climb β†’ salita instead of the gold standard scalato).", "This phenomenon is considerably more pronounced in the first frequency bins, which already have a much higher accuracy according to the gold standard.", "As for the actual errors, we observe that nearly a third of them correspond 
to named entities for all the different variants.", "Interestingly, the vast majority of the proposed translations in these cases are also named entities (e.g.", "Ryan β†’ Jason, John β†’ Paolo), which are often highly related to the original ones (e.g.", "Volvo β†’ BMW, Olympus β†’ Nikon).", "While these are clear errors, it is understandable that these methods are unable to discriminate between named entities to this degree based solely on the distributional hypothesis, in particular when it comes to common proper names (e.g.", "John, Andy), and one could design alternative strategies to address this issue like taking the edit distance as an additional signal.", "For the remaining errors, all systems tend to propose translations that have some degree of relationship with the correct ones, including nearsynonyms (e.g.", "guidelines β†’ raccomandazioni), antonyms (e.g.", "sender β†’ destinatario) and words in the same semantic field (e.g.", "nominalism β†’ intuizionismo / innatismo, which are all philosophical doctrines).", "However, there are also a few instances where the relationship is weak or unclear (e.g.", "loch β†’ giardini, sweep β†’ serrare).", "We also observe a few errors that are related to multiwords or collocations (e.g.", "carrier β†’ aereo, presumably related to the multiword air carrier / linea aerea), as well as some rare word that is repeated across many translations (Ferruzzi), which could be attributed to the hubness problem .", "All in all, our error analysis reveals that the baseline method of Artetxe et al.", "(2016) and the proposed algorithm tend to make the same kind of errors regardless of the seed dictionary used by the latter, which reinforces our interpretation in the previous section regarding an underlying optimization objective that is independent from any training dictionary.", "Moreover, it shows that the quality of the learned mappings is much better than what the raw accuracy numbers might sug-gest, encouraging the incorporation of these techniques in other applications.", "Conclusions and future work In this work, we propose a simple self-learning framework to learn bilingual word embedding mappings in combination with any embedding mapping and dictionary induction technique.", "Our experiments on bilingual lexicon induction and crosslingual word similarity show that our method is able to learn high quality bilingual embeddings from as little bilingual evidence as a 25 word dictionary or an automatically generated list of numerals, obtaining results that are competitive with state-of-the-art systems using much richer bilingual resources like larger dictionaries or parallel corpora.", "In spite of its simplicity, a more detailed analysis shows that our method is implicitly optimizing a meaningful objective function that is independent from any bilingual data which, with a better optimization method, might allow to learn bilingual word embeddings in a completely unsupervised manner.", "In the future, we would like to delve deeper into this direction and fine-tune our method so it can reliably learn high quality bilingual word embeddings without any bilingual evidence at all.", "In addition to that, we would like to explore non-linear transformations (Lu et al., 2015) and alternative dictionary induction methods Smith et al., 2017) .", "Finally, we would like to apply our model in the decipherment scenario (Dou et al., 2015) ." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related work", "Bilingual embedding mappings", "Unsupervised and weakly supervised bilingual embeddings", "Embedding mapping", "Dictionary induction", "Experiments and results", "Experimental settings", "Bilingual lexicon induction", "Crosslingual word similarity", "Global optimization objective", "Error analysis", "Conclusions and future work" ] }
GEM-SciDuet-train-102#paper-1266#slide-2
Experiments
Dataset by Dinu et al. (2015) extended to German and Finnish; Monolingual embeddings (CBOW + negative sampling); Seed dictionary: 5,000 word pairs / 25 word pairs / numerals; Test dictionary: 1,500 word pairs; Bi. data WS RG WS
Dataset by Dinu et al. (2015) extended to German and Finnish; Monolingual embeddings (CBOW + negative sampling); Seed dictionary: 5,000 word pairs / 25 word pairs / numerals; Test dictionary: 1,500 word pairs; Bi. data WS RG WS
[]
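The "numerals" seed dictionary mentioned in the record above is described in the paper_content_text as the words matching the [0-9]+ regular expression in both vocabularies, each paired with itself, so it can be generated with no bilingual data at all. The snippet below is only an illustrative sketch, not code from the paper's release; it assumes the two vocabularies are available as plain iterables of word strings, and the function name is my own.

```python
import re

def numeral_seed_dictionary(src_vocab, trg_vocab):
    # Tokens made up entirely of Arabic digits that appear in both vocabularies
    # are paired with themselves (e.g. 1-1, 2-2, 1992-1992).
    digits = re.compile(r"[0-9]+")
    src_nums = {w for w in src_vocab if digits.fullmatch(w)}
    trg_nums = {w for w in trg_vocab if digits.fullmatch(w)}
    return [(w, w) for w in sorted(src_nums & trg_nums)]
```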
GEM-SciDuet-train-102#paper-1266#slide-3
1266
Learning bilingual word embeddings with (almost) no bilingual data
Most methods to learn bilingual word embeddings rely on large parallel corpora, which is difficult to obtain for most language pairs. This has motivated an active research line to relax this requirement, with methods that use document-aligned corpora or bilingual dictionaries of a few thousand words instead. In this work, we further reduce the need of bilingual resources using a very simple self-learning approach that can be combined with any dictionary-based mapping technique. Our method exploits the structural similarity of embedding spaces, and works with as little bilingual evidence as a 25 word dictionary or even an automatically generated list of numerals, obtaining results comparable to those of systems that use richer resources.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197 ], "paper_content_text": [ "Introduction Multilingual word embeddings have attracted a lot of attention in recent times.", "In addition to having a direct application in inherently crosslingual tasks like machine translation (Zou et al., 2013) and crosslingual entity linking (Tsai and Roth, 2016) , they provide an excellent mechanism for transfer learning, where a model trained in a resource-rich language is transferred to a less-resourced one, as shown with part-of-speech tagging , parsing (Xiao and Guo, 2014) and document classification (Klementiev et al., 2012) .", "Most methods to learn these multilingual word embeddings make use of large parallel corpora (Gouws et al., 2015; Luong et al., 2015) , but there have been several proposals to relax this requirement, given its scarcity in most language pairs.", "A possible relaxation is to use document-aligned or label-aligned comparable corpora (SΓΈgaard et al., 2015; VuliΔ‡ and Moens, 2016; Mogadala and Rettinger, 2016) , but large amounts of such corpora are not always available for some language pairs.", "An alternative approach that we follow here is to independently train the embeddings for each language on monolingual corpora, and then learn a linear transformation to map the embeddings from one space into the other by minimizing the distances in a bilingual dictionary, usually in the range of a few thousand entries (Mikolov et al., 2013a; Artetxe et al., 2016) .", "However, dictionaries of that size are not readily available for many language pairs, specially those involving less-resourced languages.", "In this work, we reduce the need of large bilingual dictionaries to much smaller seed dictionaries.", "Our method can work with as little as 25 word pairs, which are straightforward to obtain assuming some basic knowledge of the languages involved.", "The method can also work with trivially generated seed dictionaries of numerals (i.e.", "1-1, 2-2, 3-3, 4-4...) 
making it possible to learn bilingual word embeddings without any real bilingual data.", "In either case, we obtain very competitive results, comparable to other state-of-the-art methods that make use of much richer bilingual resources.", "The proposed method is an extension of existing mapping techniques, where the dictionary is used to learn the embedding mapping and the embedding mapping is used to induce a new dictionary iteratively in a self-learning fashion (see Figure 1) .", "In spite of its simplicity, our analysis of the implicit optimization objective reveals that the method is exploiting the structural similarity of independently trained embeddings.", "We analyze previous work in Section 2.", "Section 3 describes the self-learning framework, while Section 4 presents the experiments.", "Section 5 analyzes the underlying optimization objective, and Section 6 presents an error analysis.", "Figure 1 : A general schema of the proposed self-learning framework.", "Previous works learn a mapping W based on the seed dictionary D, which is then used to learn the full dictionary.", "In our proposal we use the new dictionary to learn a new mapping, iterating until convergence.", "Related work We will first focus on bilingual embedding mappings, which are the basis of our proposals, and then on other unsupervised and weakly supervised methods to learn bilingual word embeddings.", "Bilingual embedding mappings Methods to induce bilingual mappings work by independently learning the embeddings in each language using monolingual corpora, and then learning a transformation from one embedding space into the other based on a bilingual dictionary.", "The first of such methods is due to Mikolov et al.", "(2013a) , who learn the linear transformation that minimizes the sum of squared Euclidean distances for the dictionary entries.", "The same optimization objective is used by , who constrain the transformation matrix to be orthogonal.", "Xing et al.", "(2015) incorporate length normalization in the training of word embeddings and maximize the cosine similarity instead, enforcing the orthogonality constraint to preserve the length normalization after the mapping.", "Finally, use max-margin optimization with intruder negative sampling.", "Instead of learning a single linear transformation from the source language into the target language, Faruqui and Dyer (2014) use canonical correlation analysis to map both languages to a shared vector space.", "Lu et al.", "(2015) extend this work and apply deep canonical correlation analysis to learn non-linear transformations.", "Artetxe et al.", "(2016) propose a general framework that clarifies the relation between Mikolov et al.", "(2013a) , Xing et al.", "(2015) , Faruqui and Dyer (2014) and as variants of the same core optimization objective, and show that a new variant is able to surpass them all.", "While most of the previous methods use gradient descent, Artetxe et al.", "(2016) propose an efficient analytical implementation for those same methods, recently extended by Smith et al.", "(2017) to incorporate dimensionality reduction.", "A prominent application of bilingual embedding mappings, with a direct application in machine translation (Zhao et al., 2015) , is bilingual lexicon extraction, which is also the main evaluation method.", "More specifically, the learned mapping is used to induce the translation of source language words that were missing in the original dictionary, usually by taking their nearest neighbor word in the target language according to 
cosine similarity, although and Smith et al.", "(2017) propose alternative retrieval methods to address the hubness problem.", "Unsupervised and weakly supervised bilingual embeddings As mentioned before, our method works with as little as 25 word pairs, while the methods discussed previously use thousands of pairs.", "The only exception in this regard is the work by , who only use 10 word pairs with good results on transfer learning for part-of-speech tagging.", "Our experiments will show that, although their method captures coarse-grained relations, it fails on finer-grained tasks like bilingual lexicon induction.", "Bootstrapping methods similar to ours have been previously proposed for traditional countbased vector space models (Peirsman and PadΓ³, 2010; VuliΔ‡ and Moens, 2013) .", "However, while previous techniques incrementally build a high- dimensional model where each axis encodes the co-occurrences with a specific word and its equivalent in the other language, our method works with low-dimensional pre-trained word embeddings, which are more widely used nowadays.", "A practical aspect for reducing the need of bilingual supervision is on the design of the seed dictionary.", "This is analyzed in depth by VuliΔ‡ and Korhonen (2016) , who propose using documentaligned corpora to extract the training dictionary.", "A more common approach is to rely on shared words and cognates (Peirsman and PadΓ³, 2010; Smith et al., 2017) , eliminating the need of bilingual data in practice.", "Our use of shared numerals exploits the same underlying idea, but relies on even less bilingual evidence and should thus generalize better to distant language pairs.", "Miceli Barone (2016) and Cao et al.", "(2016) go one step further and attempt to learn bilingual embeddings without any bilingual evidence.", "The former uses adversarial autoencoders (Makhzani et al., 2016) , combining an encoder that maps the source language embeddings into the target language, a decoder that reconstructs the original embeddings, and a discriminator that distinguishes mapped embeddings from real target language embeddings, whereas the latter adds a regularization term to the training of word embeddings that pushes the mean and variance of each dimension in different languages close to each other.", "Although promising, the reported performance in both cases is poor in comparison to other methods.", "Finally, the induction of bilingual knowledge from monolingual corpora is closely related to the decipherment scenario, for which models that incorporate word embeddings have also been proposed (Dou et al., 2015) .", "However, decipherment is only concerned with translating text from one language to another and relies on complex statistical models that are designed specifically for that purpose, while our approach is more general and learns task-independent multilingual embeddings.", "Algorithm 2 Proposed self-learning framework Input: X (source embeddings) Input: Z (target embeddings) Input: D (seed dictionary) 1: repeat 2: W ← LEARN MAPPING(X, Z, D) 3: D ← LEARN DICTIONARY(X, Z, W ) 4: until convergence criterion 5: EVALUATE DICTIONARY(D) 3 Proposed self-learning framework As discussed in Section 2.1, a common evaluation task (and practical application) of bilingual embedding mappings is to induce bilingual lexicons, that is, to obtain the translation of source words that were missing in the training dictionary, which are then compared to a gold standard test dictionary for evaluation.", "This way, one can say that the seed 
(train) dictionary is used to learn a mapping, which is then used to induce a better dictionary (at least in the sense that it is larger).", "Algorithm 1 summarizes this framework.", "Following this observation, we propose to use the output dictionary in Algorithm 1 as the input of the same system in a self-learning fashion which, assuming that the output dictionary was indeed better than the original one, should serve to learn a better mapping and, consequently, an even better dictionary the second time.", "The process can then be repeated iteratively to obtain a hopefully better mapping and dictionary each time until some convergence criterion is met.", "Algorithm 2 summarizes this alternative framework that we propose.", "Our method can be combined with any embedding mapping and dictionary induction technique (see Section 2.1).", "However, efficiency turns out to be critical for a variety of reasons.", "First of all, by enclosing the learning logic in a loop, the total training time is increased by the number of iterations.", "Even more importantly, our framework requires to explicitly build the entire dictionary at each iteration, whereas previous work tends to induce the translation of individual words ondemand later at runtime.", "Moreover, from the second iteration onwards, it is this induced, full dictionary that has to be used to learn the embedding mapping, and not the considerably smaller seed dictionary as it is typically done.", "In the following two subsections, we respectively describe the embedding mapping method and the dictionary in-duction method that we adopt in our work with these efficiency requirements in mind.", "Embedding mapping As discussed in Section 2.1, most previous methods to learn embedding mappings use variants of gradient descent.", "Among the more efficient exact alternatives, we decide to adopt the one by Artetxe et al.", "(2016) for its simplicity and good results as reported in their paper.", "We next present their method, adapting the formalization to explicitly incorporate the dictionary as required by our self-learning algorithm.", "Let X and Z denote the word embedding matrices in two languages so that X i * corresponds to the ith source language word embedding and Z j * corresponds to the jth target language embedding.", "While Artetxe et al.", "(2016) assume these two matrices are aligned according to the dictionary, we drop this assumption and represent the dictionary explicitly as a binary matrix D, so that D ij = 1 if the ith source language word is aligned with the jth target language word.", "The goal is then to find the optimal mapping matrix W * so that the sum of squared Euclidean distances between the mapped source embeddings X i * W and target embeddings Z j * for the dictionary entries D ij is minimized: W * = arg min W i j D ij ||X i * W βˆ’ Z j * || 2 Following Artetxe et al.", "(2016) , we length normalize and mean center the embedding matrices X and Z in a preprocessing step, and constrain W to be an orthogonal matrix (i.e.", "W W T = W T W = I), which serves to enforce monolingual invariance, preventing a degradation in monolingual performance while yielding to better bilingual mappings.", "Under such orthogonality constraint, minimizing the squared Euclidean distance becomes equivalent to maximizing the dot product, so the above optimization objective can be reformulated as follows: W * = arg max W Tr XW Z T D T where Tr (Β·) denotes the trace operator (the sum of all the elements in the main diagonal).", "The optimal orthogonal 
solution for this problem is given by W * = U V T , where X T DZ = U Ξ£V T is the singular value decomposition of X T DZ.", "Since the dictionary matrix D is sparse, this can be efficiently computed in linear time with respect to the number of dictionary entries.", "Dictionary induction As discussed in Section 2.1, practically all previous work uses nearest neighbor retrieval for word translation induction based on embedding mappings.", "In nearest neighbor retrieval, each source language word is assigned the closest word in the target language.", "In our work, we use the dot product between the mapped source language embeddings and the target language embeddings as the similarity measure, which is roughly equivalent to cosine similarity given that we apply length normalization followed by mean centering as a preprocessing step (see Section 3.1).", "This way, following the notation in Section 3.1, we set D ij = 1 if j = argmax k (X i * W ) Β· Z k * and D ij = 0 other- wise 1 .", "While we find that independently computing the similarity measure between all word pairs is prohibitively slow, the computation of the entire similarity matrix XW Z T can be easily vectorized using popular linear algebra libraries, obtaining big performance gains.", "However, the resulting similarity matrix is often too large to fit in memory when using large vocabularies.", "For that reason, instead of computing the entire similarity matrix XW Z T in a single step, we iteratively compute submatrices of it using vectorized matrix multiplication, find their corresponding maxima each time, and then combine the results.", "Experiments and results In this section, we experimentally test the proposed method in bilingual lexicon induction and crosslingual word similarity.", "Subsection 4.1 describes the experimental settings, while Subsections 4.2 and 4.3 present the results obtained in each of the tasks.", "The code and resources necessary to reproduce our experiments are available at https://github.com/artetxem/ vecmap.", "Experimental settings For easier comparison with related work, we evaluated our mappings on bilingual lexicon induction using the public English-Italian dataset by , which includes monolingual word embeddings in both languages together with a bilingual dictionary split in a training set and a test set 2 .", "The embeddings were trained with the word2vec toolkit with CBOW and negative sampling (Mikolov et al., 2013b ) 3 , using a 2.8 billion word corpus for English (ukWaC + Wikipedia + BNC) and a 1.6 billion word corpus for Italian (itWaC) .", "The training and test sets were derived from a dictionary built form Europarl word alignments and available at OPUS (Tiedemann, 2012) , taking 1,500 random entries uniformly distributed in 5 frequency bins as the test set and the 5,000 most frequent of the remaining word pairs as the training set.", "In addition to English-Italian, we selected two other languages from different language families with publicly available resources.", "We thus created analogous datasets for English-German and English-Finnish.", "In the case of German, the embeddings were trained on the 0.9 billion word corpus SdeWaC, which is part of the WaCky collection (Baroni et al., 2009 ) that was also used for English and Italian.", "Given that Finnish is not included in this collection, we used the 2.8 billion word Common Crawl corpus provided at WMT 2016 4 instead, which we tokenized using the Stanford Tokenizer (Manning et al., 2014) .", "In addition to that, we created training and test 
sets for both pairs from their respective Europarl dictionaries from OPUS following the exact same procedure used for English-Italian, and the word embeddings were also trained using the same configuration as .", "Given that the main focus of our work is on small seed dictionaries, we created random subsets of 2,500, 1,000, 500, 250, 100, 75, 50 and 25 entries from the original training dictionaries of 5,000 entries.", "This was done by shuffling once the training dictionaries and taking their first k entries, so it is guaranteed that each dictionary is a strict subset of the bigger dictionaries.", "In addition to that, we explored using automatically generated dictionaries as a shortcut to practical unsupervised learning.", "For that purpose, we created numeral dictionaries, consisting of words matching the [0-9]+ regular expression in both vocabularies (e.g.", "1-1, 2-2, 3-3, 1992-1992 etc.).", "The resulting dictionary had 2772 entries for English-Italian, 2148 for English-German, and 2345 for English-Finnish.", "While more sophisticated approaches are possible (e.g.", "involving the edit distance of all words), we believe that this method is general enough that should work with practically any language pair, as Arabic numerals are often used even in languages with a different writing system (e.g.", "Chinese and Russian).", "While bilingual lexicon induction is a standard evaluation task for seed dictionary based methods like ours, it is unsuitable for bilingual corpus based methods, as statistical word alignment already provides a reliable way to derive dictionaries from bilingual corpora and, in fact, this is how the test dictionary itself is built in our case.", "For that reason, we carried out some experiments in crosslingual word similarity as a way to test our method in a different task and allowing to compare it to systems that use richer bilingual data.", "There are no many crosslingual word similarity datasets, and we used the RG-65 and WordSim-353 crosslingual datasets for English-German and the WordSim-353 crosslingual dataset for English-Italian as published by Camacho-Collados et al.", "(2015) 5 .", "As for the convergence criterion, we decide to stop training when the improvement on the average dot product for the induced dictionary falls below a given threshold from one iteration to the next.", "After length normalization, the dot product ranges from -1 to 1, so we decide to set this threshold at 1e-6, which we find to be a very conservative value yet enough that training takes a reasonable amount of time.", "The curves in the next section confirm that this was a reasonable choice.", "This convergence criterion is usually met in less than 100 iterations, each of them taking 5 minutes on a modest desktop computer (Intel Core i5-4670 CPU with 8GiB of RAM), including the induction of a dictionary of 200,000 words at each iteration.", "Bilingual lexicon induction For the experiments on bilingual lexicon induction, we compared our method with those proposed by Mikolov et al.", "(2013a) , Xing et al.", "(2015) , and Artetxe et al.", "(2016) , all of them implemented as part of the framework proposed by the latter.", "The results ob- Table 1 : Accuracy (%) on bilingual lexicon induction for different seed dictionaries tained with the 5,000 entry, 25 entry and the numerals dictionaries for all the 3 language pairs are given in Table 1 .", "The results for the 5,000 entry dictionaries show that our method is comparable or even better than the other systems.", "As another 
reference, the best published results using nearest-neighbor retrieval are due to , who report an accuracy of 40.20% for the full English-Italian dictionary, almost at pair with our system (39.67%).", "In any case, the main focus of our work is on smaller dictionaries, and it is under this setting that our method really stands out.", "The 25 entry and numerals columns in Table 1 show the results for this setting, where all previous methods drop dramatically, falling below 1% accuracy in all cases.", "The method by also obtains poor results with small dictionaries, which reinforces our hypothesis in Section 2.2 that their method can only capture coarse-grain bilingual relations for small dictionaries.", "In contrast, our proposed method obtains very competitive results for all dictionaries, with a difference of only 1-2 points between the full dictionary and both the 25 entry dictionary and the numerals dictionary in all three languages.", "Figure 2 shows the curve of the English-Italian accuracy for different seed dictionary sizes, confirming this trend.", "Finally, it is worth mentioning that, even if all the three language pairs show the same general behavior, there are clear differences in their absolute accuracy numbers, which can be attributed to the linguistic proximity of the languages involved.", "In particular, the results for English-Finnish are about 10 points below the rest, which is explained by the fact that Finnish is a non-indoeuropean agglutinative language, making the task considerably more difficult for this language pair.", "In this regard, we believe that the good results with small dictionaries are a strong indication of the robustness of our method, showing that it is able to learn good bilingual mappings from very little bilingual ev-idence even for distant language pairs where the structural similarity of the embedding spaces is presumably weaker.", "Crosslingual word similarity In addition to the baseline systems in Section 4.2, in the crosslingual similarity experiments we also tested the method by Luong et al.", "(2015) , which is the state-of-the-art for bilingual word embeddings based on parallel corpora (Upadhyay et al., 2016) 6 .", "As this method is an extension of word2vec, we used the same hyperparameters as for the monolingual embeddings when possible (see Section 4.1), and leave the default ones otherwise.", "We used Europarl as our parallel corpus to train this method as done by the authors, which consists of nearly 2 million parallel sentences.", "As shown in the results in Table 2 , our method obtains the best results in all cases, surpassing the rest of the dictionary-based methods by 1-3 points depending on the dataset.", "But, most importantly, it does not suffer from any significant degradation for using smaller dictionaries and, in fact, our method gets better results using the 25 entry dictionary or the numeral list as the only bilingual evidence than any of the baseline systems using much richer resources.", "The relatively poor results of Luong et al.", "(2015) can be attributed to the fact that the dictionary based methods make use of much bigger monolingual corpora, while methods based on parallel corpora are restricted to smaller corpora.", "However, it is not clear how to introduce monolingual corpora on those methods.", "We did run some experiments with BilBOWA (Gouws et al., 2015) , which supports training in monolingual corpora in addition to bilingual corpora, but obtained very poor results 7 .", "All in all, our experiments show 
Figure 2 : Accuracy on English-Italian bilingual lexicon induction for different seed dictionaries that it is better to use large monolingual corpora in combination with very little bilingual data rather than a bilingual corpus of a standard size alone.", "Global optimization objective It might seem somehow surprising at first that, as seen in the previous section, our simple selflearning approach is able to learn high quality bilingual embeddings from small seed dictionaries instead of falling in degenerated solutions.", "In this section, we try to shed light on our approach, and give empirical evidence supporting our claim.", "More concretely, we argue that, for the embedding mapping and dictionary induction methods described in Section 3, the proposed selflearning framework is implicitly solving the following global optimization problem 8 : W * = arg max W i max j (X i * W ) Β· Z j * s.t.", "W W T = W T W = I Contrary to the optimization objective for W in Section 3.1, the global optimization objective does not refer to any dictionary, and maximizes the similarity between each source language word and its closest target language word.", "Intuitively, a random solution would map source language embeddings to seemingly random locations in the target language space, and it would thus be unlikely that BilBOWA.", "8 While we restrict our formal analysis to the embedding mapping and dictionary induction method that we use, the general reasoning should be valid for other choices as well.", ".628 .739 .604 Table 2 : Spearman correlations on English-Italian and English-German crosslingual word similarity they have any target language word nearby, making the optimization value small.", "In contrast, a good solution would map source language words close to their translation equivalents in the target language space, and they would thus have their corresponding embeddings nearby, making the optimization value large.", "While it is certainly possible to build degenerated solutions that take high optimization values for small subsets of the vocabulary, we think that the structural similarity between independently trained embedding spaces in different languages is strong enough that optimizing this function yields to meaningful bilingual mappings when the size of the vocabulary is much larger than the dimensionality of the embeddings.", "The reasoning for how the self-learning framework is optimizing this objective is as follows.", "At the end of each iteration, the dictionary D is updated to assign, for the current mapping W , each source language word to its closest target language word.", "This way, when we update W to maximize the average similarity of these dictionary entries at the beginning of the next iteration, it is guaranteed that the value of the optimization objective will improve (or at least remain the same).", "The reason is that the average similarity between each word and what were previously the closest words will be improved if possible, as this is what the updated W directly optimizes (see Section 3.1).", "In addition to that, it is also possible that, for some source words, some other target words get closer after the update.", "Thanks to this, our self-learning algorithm is guaranteed to converge to a local optimum of the above global objective, behaving like an alternating optimization algorithm for it.", "It is interesting to note that the above reasoning is valid no matter what the the initial solution is, and, in fact, the global optimization objective does not depend on the 
seed dictionary nor any other bilingual resource.", "For that reason, it should be possible to use a random initialization instead of a small seed dictionary.", "However, we empirically observe that this works poorly in practice, as our algorithm tends to get stuck in poor local optima when the initial solution is not good enough.", "The general behavior of our method is reflected in Figure 3 , which shows the learning curve for different seed dictionaries according to both the objective function and the accuracy on bilingual lexicon induction.", "As it can be seen, the objective function is improved from iteration to iteration and converges to a local optimum just as expected.", "At the same time, the learning curves show a strong correlation between the optimization objective and the accuracy, as it can be clearly observed that improving the former leads to an improvement of the latter, confirming our explanations.", "Regarding random initialization, the figure shows that the algorithm gets stuck in a poor local optimum of the objective function, which is the reason of the bad performance (0% accuracy) on bilingual lexicon induction, but the proposed optimization objective itself seems to be adequate.", "Finally, we empirically observe that our algorithm learns similar mappings no matter what the seed dictionary was.", "We first repeated our experiments on English-Italian bilingual lexicon induction for 5 different dictionaries of 25 entries, obtaining an average accuracy of 38.15% and a standard deviation of only 0.75%.", "In addition to that, we observe that the overlap between the predictions made when starting with the full dictionary and the numerals dictionary is 76.00% (60.00% for the 25 entry dictionary).", "At the same time, 37.00% of the test cases are correctly solved by both instances, and it is only 5.07% of the test cases that one of them gets right and the other wrong (34.00% and 8.94% for the 25 entry dictionary).", "This suggests that our algorithm tends to converge to similar solutions even for disjoint seed dictionaries, which is in line with our view that we are implicitly optimizing an objective that is independent from the seed dictionary, yet a seed dictionary is necessary to build a good enough initial solution to avoid getting stuck in poor local optima.", "For that reason, it is likely that better methods to tackle this optimization problem would allow learning bilingual word embeddings without any bilingual evidence at all and, in this regard, we believe that our work opens exciting opportunities for future research.", "Error analysis So as to better understand the behavior of our system, we performed an error analysis of its output in English-Italian bilingual lexicon induction when starting with the 5,000 entry, the 25 entry and the numeral dictionaries in comparison with the baseline method of Artetxe et al.", "(2016) Our analysis first reveals that, in all the cases, about a third of the translations taken as erroneous according to the gold standard are not so in real-ity.", "This corresponds to both different morphological variants of the gold standard translations (e.g.", "dichiarato/dichiarΓ²) and other valid translations that were missing in the gold standard (e.g.", "climb β†’ salita instead of the gold standard scalato).", "This phenomenon is considerably more pronounced in the first frequency bins, which already have a much higher accuracy according to the gold standard.", "As for the actual errors, we observe that nearly a third of them correspond 
to named entities for all the different variants.", "Interestingly, the vast majority of the proposed translations in these cases are also named entities (e.g.", "Ryan β†’ Jason, John β†’ Paolo), which are often highly related to the original ones (e.g.", "Volvo β†’ BMW, Olympus β†’ Nikon).", "While these are clear errors, it is understandable that these methods are unable to discriminate between named entities to this degree based solely on the distributional hypothesis, in particular when it comes to common proper names (e.g.", "John, Andy), and one could design alternative strategies to address this issue like taking the edit distance as an additional signal.", "For the remaining errors, all systems tend to propose translations that have some degree of relationship with the correct ones, including nearsynonyms (e.g.", "guidelines β†’ raccomandazioni), antonyms (e.g.", "sender β†’ destinatario) and words in the same semantic field (e.g.", "nominalism β†’ intuizionismo / innatismo, which are all philosophical doctrines).", "However, there are also a few instances where the relationship is weak or unclear (e.g.", "loch β†’ giardini, sweep β†’ serrare).", "We also observe a few errors that are related to multiwords or collocations (e.g.", "carrier β†’ aereo, presumably related to the multiword air carrier / linea aerea), as well as some rare word that is repeated across many translations (Ferruzzi), which could be attributed to the hubness problem .", "All in all, our error analysis reveals that the baseline method of Artetxe et al.", "(2016) and the proposed algorithm tend to make the same kind of errors regardless of the seed dictionary used by the latter, which reinforces our interpretation in the previous section regarding an underlying optimization objective that is independent from any training dictionary.", "Moreover, it shows that the quality of the learned mappings is much better than what the raw accuracy numbers might sug-gest, encouraging the incorporation of these techniques in other applications.", "Conclusions and future work In this work, we propose a simple self-learning framework to learn bilingual word embedding mappings in combination with any embedding mapping and dictionary induction technique.", "Our experiments on bilingual lexicon induction and crosslingual word similarity show that our method is able to learn high quality bilingual embeddings from as little bilingual evidence as a 25 word dictionary or an automatically generated list of numerals, obtaining results that are competitive with state-of-the-art systems using much richer bilingual resources like larger dictionaries or parallel corpora.", "In spite of its simplicity, a more detailed analysis shows that our method is implicitly optimizing a meaningful objective function that is independent from any bilingual data which, with a better optimization method, might allow to learn bilingual word embeddings in a completely unsupervised manner.", "In the future, we would like to delve deeper into this direction and fine-tune our method so it can reliably learn high quality bilingual word embeddings without any bilingual evidence at all.", "In addition to that, we would like to explore non-linear transformations (Lu et al., 2015) and alternative dictionary induction methods Smith et al., 2017) .", "Finally, we would like to apply our model in the decipherment scenario (Dou et al., 2015) ." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related work", "Bilingual embedding mappings", "Unsupervised and weakly supervised bilingual embeddings", "Embedding mapping", "Dictionary induction", "Experiments and results", "Experimental settings", "Bilingual lexicon induction", "Crosslingual word similarity", "Global optimization objective", "Error analysis", "Conclusions and future work" ] }
GEM-SciDuet-train-102#paper-1266#slide-3
Why does it work
Implicit objective: W* = arg max_W Ξ£_i max_j (X_i* W) Β· Z_j* s.t. W W^T = W^T W = I. Independent from seed dictionary! So why do we need a seed dictionary? Avoid poor local optima!
Implicit objective: W* = arg max_W Ξ£_i max_j (X_i* W) Β· Z_j* s.t. W W^T = W^T W = I. Independent from seed dictionary! So why do we need a seed dictionary? Avoid poor local optima!
[]
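The "Implicit objective" bullet in the slide record above lost its mathematical content during extraction. A cleaner rendering, reconstructed from Section 5 of the accompanying paper content (the notation, embedding matrices X and Z and orthogonal mapping W, is the paper's own; the LaTeX below is a reconstruction, not a verbatim copy of the published display), is:

    \[
    W^{*} \;=\; \arg\max_{W} \sum_{i} \max_{j}\, (X_{i*} W) \cdot Z_{j*}
    \qquad \text{s.t.}\quad W W^{T} = W^{T} W = I
    \]

Each self-learning iteration performs one step of alternating optimization on this objective: with W fixed, the dictionary update picks, for every source word i, the j attaining the inner max; with the dictionary fixed, the Procrustes step updates W, so the objective value can only increase (or stay the same) from iteration to iteration.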
GEM-SciDuet-train-102#paper-1266#slide-4
1266
Learning bilingual word embeddings with (almost) no bilingual data
Most methods to learn bilingual word embeddings rely on large parallel corpora, which are difficult to obtain for most language pairs. This has motivated an active research line to relax this requirement, with methods that use document-aligned corpora or bilingual dictionaries of a few thousand words instead. In this work, we further reduce the need for bilingual resources using a very simple self-learning approach that can be combined with any dictionary-based mapping technique. Our method exploits the structural similarity of embedding spaces, and works with as little bilingual evidence as a 25-word dictionary or even an automatically generated list of numerals, obtaining results comparable to those of systems that use richer resources.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197 ], "paper_content_text": [ "Introduction Multilingual word embeddings have attracted a lot of attention in recent times.", "In addition to having a direct application in inherently crosslingual tasks like machine translation (Zou et al., 2013) and crosslingual entity linking (Tsai and Roth, 2016) , they provide an excellent mechanism for transfer learning, where a model trained in a resource-rich language is transferred to a less-resourced one, as shown with part-of-speech tagging , parsing (Xiao and Guo, 2014) and document classification (Klementiev et al., 2012) .", "Most methods to learn these multilingual word embeddings make use of large parallel corpora (Gouws et al., 2015; Luong et al., 2015) , but there have been several proposals to relax this requirement, given its scarcity in most language pairs.", "A possible relaxation is to use document-aligned or label-aligned comparable corpora (SΓΈgaard et al., 2015; VuliΔ‡ and Moens, 2016; Mogadala and Rettinger, 2016) , but large amounts of such corpora are not always available for some language pairs.", "An alternative approach that we follow here is to independently train the embeddings for each language on monolingual corpora, and then learn a linear transformation to map the embeddings from one space into the other by minimizing the distances in a bilingual dictionary, usually in the range of a few thousand entries (Mikolov et al., 2013a; Artetxe et al., 2016) .", "However, dictionaries of that size are not readily available for many language pairs, specially those involving less-resourced languages.", "In this work, we reduce the need of large bilingual dictionaries to much smaller seed dictionaries.", "Our method can work with as little as 25 word pairs, which are straightforward to obtain assuming some basic knowledge of the languages involved.", "The method can also work with trivially generated seed dictionaries of numerals (i.e.", "1-1, 2-2, 3-3, 4-4...) 
making it possible to learn bilingual word embeddings without any real bilingual data.", "In either case, we obtain very competitive results, comparable to other state-of-the-art methods that make use of much richer bilingual resources.", "The proposed method is an extension of existing mapping techniques, where the dictionary is used to learn the embedding mapping and the embedding mapping is used to induce a new dictionary iteratively in a self-learning fashion (see Figure 1) .", "In spite of its simplicity, our analysis of the implicit optimization objective reveals that the method is exploiting the structural similarity of independently trained embeddings.", "We analyze previous work in Section 2.", "Section 3 describes the self-learning framework, while Section 4 presents the experiments.", "Section 5 analyzes the underlying optimization objective, and Section 6 presents an error analysis.", "Figure 1 : A general schema of the proposed self-learning framework.", "Previous works learn a mapping W based on the seed dictionary D, which is then used to learn the full dictionary.", "In our proposal we use the new dictionary to learn a new mapping, iterating until convergence.", "Related work We will first focus on bilingual embedding mappings, which are the basis of our proposals, and then on other unsupervised and weakly supervised methods to learn bilingual word embeddings.", "Bilingual embedding mappings Methods to induce bilingual mappings work by independently learning the embeddings in each language using monolingual corpora, and then learning a transformation from one embedding space into the other based on a bilingual dictionary.", "The first of such methods is due to Mikolov et al.", "(2013a) , who learn the linear transformation that minimizes the sum of squared Euclidean distances for the dictionary entries.", "The same optimization objective is used by , who constrain the transformation matrix to be orthogonal.", "Xing et al.", "(2015) incorporate length normalization in the training of word embeddings and maximize the cosine similarity instead, enforcing the orthogonality constraint to preserve the length normalization after the mapping.", "Finally, use max-margin optimization with intruder negative sampling.", "Instead of learning a single linear transformation from the source language into the target language, Faruqui and Dyer (2014) use canonical correlation analysis to map both languages to a shared vector space.", "Lu et al.", "(2015) extend this work and apply deep canonical correlation analysis to learn non-linear transformations.", "Artetxe et al.", "(2016) propose a general framework that clarifies the relation between Mikolov et al.", "(2013a) , Xing et al.", "(2015) , Faruqui and Dyer (2014) and as variants of the same core optimization objective, and show that a new variant is able to surpass them all.", "While most of the previous methods use gradient descent, Artetxe et al.", "(2016) propose an efficient analytical implementation for those same methods, recently extended by Smith et al.", "(2017) to incorporate dimensionality reduction.", "A prominent application of bilingual embedding mappings, with a direct application in machine translation (Zhao et al., 2015) , is bilingual lexicon extraction, which is also the main evaluation method.", "More specifically, the learned mapping is used to induce the translation of source language words that were missing in the original dictionary, usually by taking their nearest neighbor word in the target language according to 
cosine similarity, although and Smith et al.", "(2017) propose alternative retrieval methods to address the hubness problem.", "Unsupervised and weakly supervised bilingual embeddings As mentioned before, our method works with as little as 25 word pairs, while the methods discussed previously use thousands of pairs.", "The only exception in this regard is the work by , who only use 10 word pairs with good results on transfer learning for part-of-speech tagging.", "Our experiments will show that, although their method captures coarse-grained relations, it fails on finer-grained tasks like bilingual lexicon induction.", "Bootstrapping methods similar to ours have been previously proposed for traditional countbased vector space models (Peirsman and PadΓ³, 2010; VuliΔ‡ and Moens, 2013) .", "However, while previous techniques incrementally build a high- dimensional model where each axis encodes the co-occurrences with a specific word and its equivalent in the other language, our method works with low-dimensional pre-trained word embeddings, which are more widely used nowadays.", "A practical aspect for reducing the need of bilingual supervision is on the design of the seed dictionary.", "This is analyzed in depth by VuliΔ‡ and Korhonen (2016) , who propose using documentaligned corpora to extract the training dictionary.", "A more common approach is to rely on shared words and cognates (Peirsman and PadΓ³, 2010; Smith et al., 2017) , eliminating the need of bilingual data in practice.", "Our use of shared numerals exploits the same underlying idea, but relies on even less bilingual evidence and should thus generalize better to distant language pairs.", "Miceli Barone (2016) and Cao et al.", "(2016) go one step further and attempt to learn bilingual embeddings without any bilingual evidence.", "The former uses adversarial autoencoders (Makhzani et al., 2016) , combining an encoder that maps the source language embeddings into the target language, a decoder that reconstructs the original embeddings, and a discriminator that distinguishes mapped embeddings from real target language embeddings, whereas the latter adds a regularization term to the training of word embeddings that pushes the mean and variance of each dimension in different languages close to each other.", "Although promising, the reported performance in both cases is poor in comparison to other methods.", "Finally, the induction of bilingual knowledge from monolingual corpora is closely related to the decipherment scenario, for which models that incorporate word embeddings have also been proposed (Dou et al., 2015) .", "However, decipherment is only concerned with translating text from one language to another and relies on complex statistical models that are designed specifically for that purpose, while our approach is more general and learns task-independent multilingual embeddings.", "Algorithm 2 Proposed self-learning framework Input: X (source embeddings) Input: Z (target embeddings) Input: D (seed dictionary) 1: repeat 2: W ← LEARN MAPPING(X, Z, D) 3: D ← LEARN DICTIONARY(X, Z, W ) 4: until convergence criterion 5: EVALUATE DICTIONARY(D) 3 Proposed self-learning framework As discussed in Section 2.1, a common evaluation task (and practical application) of bilingual embedding mappings is to induce bilingual lexicons, that is, to obtain the translation of source words that were missing in the training dictionary, which are then compared to a gold standard test dictionary for evaluation.", "This way, one can say that the seed 
(train) dictionary is used to learn a mapping, which is then used to induce a better dictionary (at least in the sense that it is larger).", "Algorithm 1 summarizes this framework.", "Following this observation, we propose to use the output dictionary in Algorithm 1 as the input of the same system in a self-learning fashion which, assuming that the output dictionary was indeed better than the original one, should serve to learn a better mapping and, consequently, an even better dictionary the second time.", "The process can then be repeated iteratively to obtain a hopefully better mapping and dictionary each time until some convergence criterion is met.", "Algorithm 2 summarizes this alternative framework that we propose.", "Our method can be combined with any embedding mapping and dictionary induction technique (see Section 2.1).", "However, efficiency turns out to be critical for a variety of reasons.", "First of all, by enclosing the learning logic in a loop, the total training time is increased by the number of iterations.", "Even more importantly, our framework requires to explicitly build the entire dictionary at each iteration, whereas previous work tends to induce the translation of individual words ondemand later at runtime.", "Moreover, from the second iteration onwards, it is this induced, full dictionary that has to be used to learn the embedding mapping, and not the considerably smaller seed dictionary as it is typically done.", "In the following two subsections, we respectively describe the embedding mapping method and the dictionary in-duction method that we adopt in our work with these efficiency requirements in mind.", "Embedding mapping As discussed in Section 2.1, most previous methods to learn embedding mappings use variants of gradient descent.", "Among the more efficient exact alternatives, we decide to adopt the one by Artetxe et al.", "(2016) for its simplicity and good results as reported in their paper.", "We next present their method, adapting the formalization to explicitly incorporate the dictionary as required by our self-learning algorithm.", "Let X and Z denote the word embedding matrices in two languages so that X i * corresponds to the ith source language word embedding and Z j * corresponds to the jth target language embedding.", "While Artetxe et al.", "(2016) assume these two matrices are aligned according to the dictionary, we drop this assumption and represent the dictionary explicitly as a binary matrix D, so that D ij = 1 if the ith source language word is aligned with the jth target language word.", "The goal is then to find the optimal mapping matrix W * so that the sum of squared Euclidean distances between the mapped source embeddings X i * W and target embeddings Z j * for the dictionary entries D ij is minimized: W * = arg min W i j D ij ||X i * W βˆ’ Z j * || 2 Following Artetxe et al.", "(2016) , we length normalize and mean center the embedding matrices X and Z in a preprocessing step, and constrain W to be an orthogonal matrix (i.e.", "W W T = W T W = I), which serves to enforce monolingual invariance, preventing a degradation in monolingual performance while yielding to better bilingual mappings.", "Under such orthogonality constraint, minimizing the squared Euclidean distance becomes equivalent to maximizing the dot product, so the above optimization objective can be reformulated as follows: W * = arg max W Tr XW Z T D T where Tr (Β·) denotes the trace operator (the sum of all the elements in the main diagonal).", "The optimal orthogonal 
solution for this problem is given by W * = U V T , where X T DZ = U Ξ£V T is the singular value decomposition of X T DZ.", "Since the dictionary matrix D is sparse, this can be efficiently computed in linear time with respect to the number of dictionary entries.", "Dictionary induction As discussed in Section 2.1, practically all previous work uses nearest neighbor retrieval for word translation induction based on embedding mappings.", "In nearest neighbor retrieval, each source language word is assigned the closest word in the target language.", "In our work, we use the dot product between the mapped source language embeddings and the target language embeddings as the similarity measure, which is roughly equivalent to cosine similarity given that we apply length normalization followed by mean centering as a preprocessing step (see Section 3.1).", "This way, following the notation in Section 3.1, we set D ij = 1 if j = argmax k (X i * W ) Β· Z k * and D ij = 0 other- wise 1 .", "While we find that independently computing the similarity measure between all word pairs is prohibitively slow, the computation of the entire similarity matrix XW Z T can be easily vectorized using popular linear algebra libraries, obtaining big performance gains.", "However, the resulting similarity matrix is often too large to fit in memory when using large vocabularies.", "For that reason, instead of computing the entire similarity matrix XW Z T in a single step, we iteratively compute submatrices of it using vectorized matrix multiplication, find their corresponding maxima each time, and then combine the results.", "Experiments and results In this section, we experimentally test the proposed method in bilingual lexicon induction and crosslingual word similarity.", "Subsection 4.1 describes the experimental settings, while Subsections 4.2 and 4.3 present the results obtained in each of the tasks.", "The code and resources necessary to reproduce our experiments are available at https://github.com/artetxem/ vecmap.", "Experimental settings For easier comparison with related work, we evaluated our mappings on bilingual lexicon induction using the public English-Italian dataset by , which includes monolingual word embeddings in both languages together with a bilingual dictionary split in a training set and a test set 2 .", "The embeddings were trained with the word2vec toolkit with CBOW and negative sampling (Mikolov et al., 2013b ) 3 , using a 2.8 billion word corpus for English (ukWaC + Wikipedia + BNC) and a 1.6 billion word corpus for Italian (itWaC) .", "The training and test sets were derived from a dictionary built form Europarl word alignments and available at OPUS (Tiedemann, 2012) , taking 1,500 random entries uniformly distributed in 5 frequency bins as the test set and the 5,000 most frequent of the remaining word pairs as the training set.", "In addition to English-Italian, we selected two other languages from different language families with publicly available resources.", "We thus created analogous datasets for English-German and English-Finnish.", "In the case of German, the embeddings were trained on the 0.9 billion word corpus SdeWaC, which is part of the WaCky collection (Baroni et al., 2009 ) that was also used for English and Italian.", "Given that Finnish is not included in this collection, we used the 2.8 billion word Common Crawl corpus provided at WMT 2016 4 instead, which we tokenized using the Stanford Tokenizer (Manning et al., 2014) .", "In addition to that, we created training and test 
sets for both pairs from their respective Europarl dictionaries from OPUS following the exact same procedure used for English-Italian, and the word embeddings were also trained using the same configuration as .", "Given that the main focus of our work is on small seed dictionaries, we created random subsets of 2,500, 1,000, 500, 250, 100, 75, 50 and 25 entries from the original training dictionaries of 5,000 entries.", "This was done by shuffling once the training dictionaries and taking their first k entries, so it is guaranteed that each dictionary is a strict subset of the bigger dictionaries.", "In addition to that, we explored using automatically generated dictionaries as a shortcut to practical unsupervised learning.", "For that purpose, we created numeral dictionaries, consisting of words matching the [0-9]+ regular expression in both vocabularies (e.g.", "1-1, 2-2, 3-3, 1992-1992 etc.).", "The resulting dictionary had 2772 entries for English-Italian, 2148 for English-German, and 2345 for English-Finnish.", "While more sophisticated approaches are possible (e.g.", "involving the edit distance of all words), we believe that this method is general enough that should work with practically any language pair, as Arabic numerals are often used even in languages with a different writing system (e.g.", "Chinese and Russian).", "While bilingual lexicon induction is a standard evaluation task for seed dictionary based methods like ours, it is unsuitable for bilingual corpus based methods, as statistical word alignment already provides a reliable way to derive dictionaries from bilingual corpora and, in fact, this is how the test dictionary itself is built in our case.", "For that reason, we carried out some experiments in crosslingual word similarity as a way to test our method in a different task and allowing to compare it to systems that use richer bilingual data.", "There are no many crosslingual word similarity datasets, and we used the RG-65 and WordSim-353 crosslingual datasets for English-German and the WordSim-353 crosslingual dataset for English-Italian as published by Camacho-Collados et al.", "(2015) 5 .", "As for the convergence criterion, we decide to stop training when the improvement on the average dot product for the induced dictionary falls below a given threshold from one iteration to the next.", "After length normalization, the dot product ranges from -1 to 1, so we decide to set this threshold at 1e-6, which we find to be a very conservative value yet enough that training takes a reasonable amount of time.", "The curves in the next section confirm that this was a reasonable choice.", "This convergence criterion is usually met in less than 100 iterations, each of them taking 5 minutes on a modest desktop computer (Intel Core i5-4670 CPU with 8GiB of RAM), including the induction of a dictionary of 200,000 words at each iteration.", "Bilingual lexicon induction For the experiments on bilingual lexicon induction, we compared our method with those proposed by Mikolov et al.", "(2013a) , Xing et al.", "(2015) , and Artetxe et al.", "(2016) , all of them implemented as part of the framework proposed by the latter.", "The results ob- Table 1 : Accuracy (%) on bilingual lexicon induction for different seed dictionaries tained with the 5,000 entry, 25 entry and the numerals dictionaries for all the 3 language pairs are given in Table 1 .", "The results for the 5,000 entry dictionaries show that our method is comparable or even better than the other systems.", "As another 
reference, the best published results using nearest-neighbor retrieval are due to , who report an accuracy of 40.20% for the full English-Italian dictionary, almost at pair with our system (39.67%).", "In any case, the main focus of our work is on smaller dictionaries, and it is under this setting that our method really stands out.", "The 25 entry and numerals columns in Table 1 show the results for this setting, where all previous methods drop dramatically, falling below 1% accuracy in all cases.", "The method by also obtains poor results with small dictionaries, which reinforces our hypothesis in Section 2.2 that their method can only capture coarse-grain bilingual relations for small dictionaries.", "In contrast, our proposed method obtains very competitive results for all dictionaries, with a difference of only 1-2 points between the full dictionary and both the 25 entry dictionary and the numerals dictionary in all three languages.", "Figure 2 shows the curve of the English-Italian accuracy for different seed dictionary sizes, confirming this trend.", "Finally, it is worth mentioning that, even if all the three language pairs show the same general behavior, there are clear differences in their absolute accuracy numbers, which can be attributed to the linguistic proximity of the languages involved.", "In particular, the results for English-Finnish are about 10 points below the rest, which is explained by the fact that Finnish is a non-indoeuropean agglutinative language, making the task considerably more difficult for this language pair.", "In this regard, we believe that the good results with small dictionaries are a strong indication of the robustness of our method, showing that it is able to learn good bilingual mappings from very little bilingual ev-idence even for distant language pairs where the structural similarity of the embedding spaces is presumably weaker.", "Crosslingual word similarity In addition to the baseline systems in Section 4.2, in the crosslingual similarity experiments we also tested the method by Luong et al.", "(2015) , which is the state-of-the-art for bilingual word embeddings based on parallel corpora (Upadhyay et al., 2016) 6 .", "As this method is an extension of word2vec, we used the same hyperparameters as for the monolingual embeddings when possible (see Section 4.1), and leave the default ones otherwise.", "We used Europarl as our parallel corpus to train this method as done by the authors, which consists of nearly 2 million parallel sentences.", "As shown in the results in Table 2 , our method obtains the best results in all cases, surpassing the rest of the dictionary-based methods by 1-3 points depending on the dataset.", "But, most importantly, it does not suffer from any significant degradation for using smaller dictionaries and, in fact, our method gets better results using the 25 entry dictionary or the numeral list as the only bilingual evidence than any of the baseline systems using much richer resources.", "The relatively poor results of Luong et al.", "(2015) can be attributed to the fact that the dictionary based methods make use of much bigger monolingual corpora, while methods based on parallel corpora are restricted to smaller corpora.", "However, it is not clear how to introduce monolingual corpora on those methods.", "We did run some experiments with BilBOWA (Gouws et al., 2015) , which supports training in monolingual corpora in addition to bilingual corpora, but obtained very poor results 7 .", "All in all, our experiments show 
that it is better to use large monolingual corpora in combination with very little bilingual data rather than a bilingual corpus of a standard size alone.", "Figure 2: Accuracy on English-Italian bilingual lexicon induction for different seed dictionaries.", "Global optimization objective It might seem somehow surprising at first that, as seen in the previous section, our simple self-learning approach is able to learn high quality bilingual embeddings from small seed dictionaries instead of falling into degenerate solutions.", "In this section, we try to shed light on our approach, and give empirical evidence supporting our claim.", "More concretely, we argue that, for the embedding mapping and dictionary induction methods described in Section 3, the proposed self-learning framework is implicitly solving the following global optimization problem: W* = arg max_W Ξ£_i max_j (X_i* W) Β· Z_j*, s.t. W W^T = W^T W = I.", "Contrary to the optimization objective for W in Section 3.1, the global optimization objective does not refer to any dictionary, and maximizes the similarity between each source language word and its closest target language word.", "(While we restrict our formal analysis to the embedding mapping and dictionary induction method that we use, the general reasoning should be valid for other choices as well.)", "Intuitively, a random solution would map source language embeddings to seemingly random locations in the target language space, and it would thus be unlikely that they have any target language word nearby, making the optimization value small.", "Table 2: Spearman correlations on English-Italian and English-German crosslingual word similarity.", "In contrast, a good solution would map source language words close to their translation equivalents in the target language space, and they would thus have their corresponding embeddings nearby, making the optimization value large.", "While it is certainly possible to build degenerate solutions that take high optimization values for small subsets of the vocabulary, we think that the structural similarity between independently trained embedding spaces in different languages is strong enough that optimizing this function yields meaningful bilingual mappings when the size of the vocabulary is much larger than the dimensionality of the embeddings.", "The reasoning for how the self-learning framework is optimizing this objective is as follows.", "At the end of each iteration, the dictionary D is updated to assign, for the current mapping W, each source language word to its closest target language word.", "This way, when we update W to maximize the average similarity of these dictionary entries at the beginning of the next iteration, it is guaranteed that the value of the optimization objective will improve (or at least remain the same).", "The reason is that the average similarity between each word and what were previously the closest words will be improved if possible, as this is what the updated W directly optimizes (see Section 3.1).", "In addition to that, it is also possible that, for some source words, some other target words get closer after the update.", "Thanks to this, our self-learning algorithm is guaranteed to converge to a local optimum of the above global objective, behaving like an alternating optimization algorithm for it.", "It is interesting to note that the above reasoning is valid no matter what the initial solution is, and, in fact, the global optimization objective does not depend on the 
seed dictionary nor any other bilingual resource.", "For that reason, it should be possible to use a random initialization instead of a small seed dictionary.", "However, we empirically observe that this works poorly in practice, as our algorithm tends to get stuck in poor local optima when the initial solution is not good enough.", "The general behavior of our method is reflected in Figure 3 , which shows the learning curve for different seed dictionaries according to both the objective function and the accuracy on bilingual lexicon induction.", "As it can be seen, the objective function is improved from iteration to iteration and converges to a local optimum just as expected.", "At the same time, the learning curves show a strong correlation between the optimization objective and the accuracy, as it can be clearly observed that improving the former leads to an improvement of the latter, confirming our explanations.", "Regarding random initialization, the figure shows that the algorithm gets stuck in a poor local optimum of the objective function, which is the reason of the bad performance (0% accuracy) on bilingual lexicon induction, but the proposed optimization objective itself seems to be adequate.", "Finally, we empirically observe that our algorithm learns similar mappings no matter what the seed dictionary was.", "We first repeated our experiments on English-Italian bilingual lexicon induction for 5 different dictionaries of 25 entries, obtaining an average accuracy of 38.15% and a standard deviation of only 0.75%.", "In addition to that, we observe that the overlap between the predictions made when starting with the full dictionary and the numerals dictionary is 76.00% (60.00% for the 25 entry dictionary).", "At the same time, 37.00% of the test cases are correctly solved by both instances, and it is only 5.07% of the test cases that one of them gets right and the other wrong (34.00% and 8.94% for the 25 entry dictionary).", "This suggests that our algorithm tends to converge to similar solutions even for disjoint seed dictionaries, which is in line with our view that we are implicitly optimizing an objective that is independent from the seed dictionary, yet a seed dictionary is necessary to build a good enough initial solution to avoid getting stuck in poor local optima.", "For that reason, it is likely that better methods to tackle this optimization problem would allow learning bilingual word embeddings without any bilingual evidence at all and, in this regard, we believe that our work opens exciting opportunities for future research.", "Error analysis So as to better understand the behavior of our system, we performed an error analysis of its output in English-Italian bilingual lexicon induction when starting with the 5,000 entry, the 25 entry and the numeral dictionaries in comparison with the baseline method of Artetxe et al.", "(2016) Our analysis first reveals that, in all the cases, about a third of the translations taken as erroneous according to the gold standard are not so in real-ity.", "This corresponds to both different morphological variants of the gold standard translations (e.g.", "dichiarato/dichiarΓ²) and other valid translations that were missing in the gold standard (e.g.", "climb β†’ salita instead of the gold standard scalato).", "This phenomenon is considerably more pronounced in the first frequency bins, which already have a much higher accuracy according to the gold standard.", "As for the actual errors, we observe that nearly a third of them correspond 
to named entities for all the different variants.", "Interestingly, the vast majority of the proposed translations in these cases are also named entities (e.g.", "Ryan β†’ Jason, John β†’ Paolo), which are often highly related to the original ones (e.g.", "Volvo β†’ BMW, Olympus β†’ Nikon).", "While these are clear errors, it is understandable that these methods are unable to discriminate between named entities to this degree based solely on the distributional hypothesis, in particular when it comes to common proper names (e.g.", "John, Andy), and one could design alternative strategies to address this issue like taking the edit distance as an additional signal.", "For the remaining errors, all systems tend to propose translations that have some degree of relationship with the correct ones, including nearsynonyms (e.g.", "guidelines β†’ raccomandazioni), antonyms (e.g.", "sender β†’ destinatario) and words in the same semantic field (e.g.", "nominalism β†’ intuizionismo / innatismo, which are all philosophical doctrines).", "However, there are also a few instances where the relationship is weak or unclear (e.g.", "loch β†’ giardini, sweep β†’ serrare).", "We also observe a few errors that are related to multiwords or collocations (e.g.", "carrier β†’ aereo, presumably related to the multiword air carrier / linea aerea), as well as some rare word that is repeated across many translations (Ferruzzi), which could be attributed to the hubness problem .", "All in all, our error analysis reveals that the baseline method of Artetxe et al.", "(2016) and the proposed algorithm tend to make the same kind of errors regardless of the seed dictionary used by the latter, which reinforces our interpretation in the previous section regarding an underlying optimization objective that is independent from any training dictionary.", "Moreover, it shows that the quality of the learned mappings is much better than what the raw accuracy numbers might sug-gest, encouraging the incorporation of these techniques in other applications.", "Conclusions and future work In this work, we propose a simple self-learning framework to learn bilingual word embedding mappings in combination with any embedding mapping and dictionary induction technique.", "Our experiments on bilingual lexicon induction and crosslingual word similarity show that our method is able to learn high quality bilingual embeddings from as little bilingual evidence as a 25 word dictionary or an automatically generated list of numerals, obtaining results that are competitive with state-of-the-art systems using much richer bilingual resources like larger dictionaries or parallel corpora.", "In spite of its simplicity, a more detailed analysis shows that our method is implicitly optimizing a meaningful objective function that is independent from any bilingual data which, with a better optimization method, might allow to learn bilingual word embeddings in a completely unsupervised manner.", "In the future, we would like to delve deeper into this direction and fine-tune our method so it can reliably learn high quality bilingual word embeddings without any bilingual evidence at all.", "In addition to that, we would like to explore non-linear transformations (Lu et al., 2015) and alternative dictionary induction methods Smith et al., 2017) .", "Finally, we would like to apply our model in the decipherment scenario (Dou et al., 2015) ." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3.1", "3.2", "4", "4.1", "4.2", "4.3", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related work", "Bilingual embedding mappings", "Unsupervised and weakly supervised bilingual embeddings", "Embedding mapping", "Dictionary induction", "Experiments and results", "Experimental settings", "Bilingual lexicon induction", "Crosslingual word similarity", "Global optimization objective", "Error analysis", "Conclusions and future work" ] }
GEM-SciDuet-train-102#paper-1266#slide-4
Conclusions
Simple self-learning method to train bilingual embedding mappings High quality results with almost no supervision (25 words, numerals) Implicit optimization objective independent from seed dictionary Seed dictionary necessary to avoid poor local optima Future work: fully unsupervised training
Simple self-learning method to train bilingual embedding mappings High quality results with almost no supervision (25 words, numerals) Implicit optimization objective independent from seed dictionary Seed dictionary necessary to avoid poor local optima Future work: fully unsupervised training
[]
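The conclusions slide above summarizes the self-learning method whose steps are spelled out in the paper content of this record (Algorithm 2: learn an orthogonal mapping from the current dictionary, re-induce the dictionary by nearest-neighbor retrieval over the mapped embeddings, and iterate until the average similarity stops improving). The following NumPy sketch is a minimal illustration of that loop under the stated preprocessing assumptions (length-normalized, mean-centered embeddings); the function names, batch size and tolerance are choices made here for readability, and the authors' actual implementation lives at https://github.com/artetxem/vecmap.

    import numpy as np

    def normalize(emb):
        # Length-normalize rows, then mean-center each dimension (preprocessing from Section 3.1).
        emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
        return emb - emb.mean(axis=0, keepdims=True)

    def learn_mapping(x, z, src_idx, trg_idx):
        # Orthogonal Procrustes step: W = U V^T, where U S V^T is the SVD of X_D^T Z_D
        # and X_D, Z_D are the rows of X and Z paired by the current dictionary.
        u, _, vt = np.linalg.svd(x[src_idx].T @ z[trg_idx])
        return u @ vt

    def induce_dictionary(x, z, w, batch=1000):
        # Nearest-neighbor retrieval over the mapped source embeddings, computed block by
        # block so the full source-by-target similarity matrix never has to fit in memory.
        mapped = x @ w
        trg_idx = np.empty(x.shape[0], dtype=int)
        best_sim = np.empty(x.shape[0])
        for i in range(0, x.shape[0], batch):
            sim = mapped[i:i + batch] @ z.T
            trg_idx[i:i + batch] = sim.argmax(axis=1)
            best_sim[i:i + batch] = sim.max(axis=1)
        return np.arange(x.shape[0]), trg_idx, best_sim.mean()

    def self_learning(x, z, seed_src, seed_trg, tol=1e-6, max_iter=100):
        # x, z: monolingual embedding matrices; seed_src/seed_trg: index pairs of the seed dictionary.
        x, z = normalize(x), normalize(z)
        src_idx, trg_idx = np.asarray(seed_src), np.asarray(seed_trg)
        prev_obj = -np.inf
        for _ in range(max_iter):
            w = learn_mapping(x, z, src_idx, trg_idx)
            src_idx, trg_idx, obj = induce_dictionary(x, z, w)
            if obj - prev_obj < tol:  # convergence criterion on the average dot product
                break
            prev_obj = obj
        return w, (src_idx, trg_idx)

A seed dictionary of word-index pairs (even 25 entries or a list of shared numerals, per the paper) is passed as seed_src/seed_trg; starting from a random mapping instead tends to get stuck in the poor local optima discussed in the paper content.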
GEM-SciDuet-train-103#paper-1267#slide-0
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), searching for good candidate semantic parses, and (2) choosing the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible with the text, which provides better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When evaluated on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
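Policy shaping, as named in this abstract and detailed in the paper content that follows, reweights the parser's exploration policy with a lexical critique policy before candidate programs are ranked during search. The sketch below is a schematic Python illustration, not the authors' code: the tokenized inputs, the lexicon of (question-word, program-keyword) pairs, the keyword set, and normalizing over the candidate beam rather than the full program space are placeholder assumptions made here; the default eta=5.0 follows the hyperparameter value reported in the paper content.

    import math

    def critique_score(program_tokens, question_tokens, lexicon, keywords):
        # match: fraction of non-keyword program tokens that also appear in the question.
        content = [tok for tok in program_tokens if tok not in keywords]
        match = sum(tok in question_tokens for tok in content) / max(len(content), 1)
        # co_occur: number of lexicon pairs (word, keyword) with the word in the question
        # and the keyword in the program, e.g. ("most", "Max") or ("not", "NotEqual").
        co_occur = sum((w in question_tokens) and (k in program_tokens) for w, k in lexicon)
        return match + co_occur

    def shaped_policy(candidates, base_scores, question_tokens, lexicon, keywords, eta=5.0):
        # candidates: candidate programs (token lists) from beam search.
        # base_scores: the parser's unnormalized scores for those candidates.
        # The shaped policy p_b(y|x,t) is proportional to b_theta(y|x,t,z) * p_c(y|x,t),
        # i.e. to exp(score + eta * critique) when both factors are Boltzmann policies.
        logits = [s + eta * critique_score(y, question_tokens, lexicon, keywords)
                  for y, s in zip(candidates, base_scores)]
        m = max(logits)
        exps = [math.exp(v - m) for v in logits]
        total = sum(exps)
        return [e / total for e in exps]

Using the shaped distribution in place of the raw model policy during beam exploration is what biases the search away from spurious programs such as the "Index is Min" example discussed in the paper content.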
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similar: their update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs: J_MML = log p(z_i | x_i, t_i) = log Ξ£_{y∈Y} p(z_i | y, t_i) p(y | x_i, t_i) (4).", "Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether y evaluates to z given t, or not.", "Let Gen(z, t) βŠ† Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J_MML = log Ξ£_{y∈Gen(z_i, t_i)} p(y | x_i, t_i) (5).", "In practice, the summation over Gen(.) is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y Γ— X Γ— Z β†’ ℝ, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J_RL: J_RL = Ξ£_{y∈Y} p(y | x_i, t_i) R(y, z_i) (6).", "J_RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y | x_i, t_i, z_i) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x_i, t_i, z_i), the maximum margin reward method finds the highest scoring program y_i that evaluates to z_i, as the reference program, from the set K of programs generated by the search.", "With a margin function Ξ΄ : Y Γ— Y Γ— Z β†’ ℝ and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y' | y' ∈ Y and score_ΞΈ(y, x, t) ≀ score_ΞΈ(y', x, t) + Ξ΄(y, y', z)} (7), where Ξ΄(y, y', z) = R(y, z) βˆ’ R(y', z).", "Similarly, the program that most violates the constraint can be written as: yΜ„ = arg max_{y'∈Y} {score_ΞΈ(y', x, t) + Ξ΄(y, y', z) βˆ’ score_ΞΈ(y, x, t)} (8).", "The most-violation margin objective (negative margin loss) is thus defined as: J_MMR = βˆ’max{0, score_ΞΈ(yΜ„, x_i, t_i) βˆ’ score_ΞΈ(y_i, x_i, t_i) + Ξ΄(y_i, yΜ„, z_i)}.", "Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq. (9): Ξ”(K) = Ξ£_{y∈K} w(y, x, t, z) (βˆ‡_ΞΈ score_ΞΈ(y, x, t) βˆ’ Ξ£_{y'∈Y} q(y' | x, t) βˆ‡_ΞΈ score_ΞΈ(y', x, t)).", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value, and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider the meritocratic update policy, which uses a hyperparameter Ξ² to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning algorithm, namely how aggressively the update penalizes the members of the program set.", "Table 1 caption (fragment): … (cf. Eq. (7)) and yΜ„ is the most violating program (cf. Eq. (8)); for REINFORCE, Ε· is sampled from K using p(Β·), whereas for Off-Policy Policy Gradient, Ε· is sampled using u(Β·).", "The generalized update equation provides a tool for better understanding individual algorithms, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms is closely related to the quality of the search results, given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm can be aggressive in updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as Ξ² becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
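To make the generalized update equation (Eq. 9) and the (w, q) instantiations of Table 1 concrete, here is a minimal, self-contained sketch. It assumes a toy linear scorer over hand-made feature vectors, so every name below (feats, reward, the candidate indices) is illustrative rather than taken from the authors' implementation; the point is only how swapping the intensity and the competing distribution recovers MML, REINFORCE, and maximum margin reward.

```python
# Minimal sketch of the generalized update (Eq. 9):
#   Delta(K) = sum_y w(y) * ( grad score(y) - sum_y' q(y') * grad score(y') )
# Different choices of the intensity w(.) and the competing distribution q(.)
# recover MML, REINFORCE, and maximum margin reward (Table 1). The feature
# vectors, rewards, and the linear scorer below are toy stand-ins.
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def generalized_update(feats, intensity, competing):
    """Linear scorer, so grad_theta score(y) is just feats[y].
    intensity: w(y) per candidate; competing: one distribution q over candidates."""
    expected = competing @ feats                      # E_q[ grad score ]
    return sum(intensity[y] * (feats[y] - expected) for y in range(len(feats)))

# Candidate set K: 4 programs, binary features, reward 1 iff the program
# evaluates to the gold answer.
feats  = np.array([[1., 0., 1.], [0., 1., 0.], [1., 1., 0.], [0., 0., 1.]])
reward = np.array([1., 0., 0., 1.])
theta  = np.zeros(3)
scores = feats @ theta
p      = softmax(scores)                              # model policy over K

# Maximum marginal likelihood: w(y) proportional to p(y) on correct programs, q = p.
w_mml = p * reward
w_mml = w_mml / w_mml.sum()
q_mml = p

# REINFORCE: sample one program from p; w is its reward on that sample, q is again p.
rng   = np.random.default_rng(0)
y_hat = rng.choice(len(feats), p=p)
w_rl  = np.zeros(len(feats))
w_rl[y_hat] = reward[y_hat]
q_rl  = p

# Maximum margin reward: reference = best-scoring correct program, competing
# distribution = point mass on the most violating program (Eq. 8).
ref   = int(np.argmax(np.where(reward == 1.0, scores, -np.inf)))
viol  = int(np.argmax(scores + (reward[ref] - reward) - scores[ref]))
w_mmr = np.zeros(len(feats))
w_mmr[ref] = 1.0
q_mmr = np.zeros(len(feats))
q_mmr[viol] = 1.0

for name, w, q in [("MML", w_mml, q_mml), ("REINFORCE", w_rl, q_rl), ("MMR", w_mmr, q_mmr)]:
    print(name, generalized_update(feats, w, q))
```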
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-0
Semantic Parsing with Execution
What nation scored the most points? Where Points is Max Christelle Le Duff France Charlotte Barras England England
What nation scored the most points? Where Points is Max Christelle Le Duff France Charlotte Barras England England
[]
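The paper content above contrasts maximum margin reward (MMR), which penalizes only the most violating program, with the proposed MAVER variant, which penalizes all violating programs. The sketch below is one plausible reading of that contrast: the averaging inside MAVER follows the name "average violation reward" but is an assumption, and the toy programs, scores, and rewards are invented.

```python
# One plausible reading of MMR vs. MAVER. `score(y)` is the model score and
# `reward(y)` is R(y, z) in [0, 1]; programs are plain strings for illustration.
def margin_losses(candidates, score, reward):
    correct = [y for y in candidates if reward(y) == 1.0]
    if not correct:
        return 0.0, 0.0
    ref = max(correct, key=score)                         # reference program
    # Margin violations: score(ref) <= score(y) + (R(ref) - R(y)), per Eq. (7).
    violations = [score(y) + (reward(ref) - reward(y)) - score(ref)
                  for y in candidates
                  if y != ref and score(ref) <= score(y) + (reward(ref) - reward(y))]
    if not violations:
        return 0.0, 0.0
    mmr = max(0.0, max(violations))                       # most violating program only
    maver = max(0.0, sum(violations) / len(violations))   # all violating programs, averaged
    return mmr, maver

scores  = {"prog_a": 0.4, "prog_b": 0.9, "prog_c": 0.7, "prog_d": 0.5}
rewards = {"prog_a": 1.0, "prog_b": 1.0, "prog_c": 0.0, "prog_d": 0.0}
print(margin_losses(list(scores), scores.get, rewards.get))   # MMR 0.8, MAVER 0.7
```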
GEM-SciDuet-train-103#paper-1267#slide-1
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) searching for good candidate semantic parses given only the denotations (e.g., answers), and (2) choosing the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible with the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When evaluated on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-1
Indirect Supervision
No gold programs during training What nation scored the most points? Where Points is Max Christelle Le Duff France Charlotte Barras England England
No gold programs during training What nation scored the most points? Where Points is Max Christelle Le Duff France Charlotte Barras England England
[]
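The experimental setup described in the content above ranks candidate programs for exploration using a Jaccard-similarity reward, with the model score as a tie breaker (the limit of the base exploration policy as the temperature on the reward goes to infinity). A minimal sketch, with `execute` standing in for the real SQL executor and all answers and scores made up:

```python
# Sketch of the reward and exploration ranking from the setup above: R(y, z) is
# the Jaccard similarity between the gold answer and the program's answer, and
# candidates are effectively sorted by reward, then by model score.
def jaccard_reward(pred_answer, gold_answer):
    pred, gold = set(pred_answer), set(gold_answer)
    if not pred and not gold:
        return 1.0
    return len(pred & gold) / len(pred | gold)

def rank_candidates(programs, execute, model_score, gold_answer):
    # Sort by (reward, model score), highest first.
    return sorted(programs,
                  key=lambda y: (jaccard_reward(execute(y), gold_answer), model_score(y)),
                  reverse=True)

# Toy usage with invented programs, answers, and scores.
answers = {"prog_a": {"England"}, "prog_b": {"France"}, "prog_c": {"England", "France"}}
scores  = {"prog_a": 0.2, "prog_b": 0.9, "prog_c": 0.5}
print(rank_candidates(list(answers), answers.get, scores.get, {"England"}))
```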
GEM-SciDuet-train-103#paper-1267#slide-2
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) searching for good candidate semantic parses given only the denotations (e.g., answers), and (2) choosing the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible with the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When evaluated on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
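The policy-shaping step described in the paper content above combines the parser's behavior policy with a lexical critique policy: the critique score is the sum of a surface-form match ratio and a lexicon co-occurrence count, and the shaped policy renormalizes the product of the two policies (Eq. (2) and (3)). The sketch below is a minimal illustration of that computation, not the authors' code; the token-list program representation, the tiny lexicon, and the function names are assumptions made for the example.

```python
import math

KEYWORDS = {"Select", "Where", "is", "Max", "Min", "NotEqual"}  # assumed keyword set

def critique_score(question_tokens, program_tokens, lexicon):
    """critique(y, x) = match(x, y) + co_occur(y, x), per the description above."""
    non_keyword = [t for t in program_tokens if t not in KEYWORDS]
    match = (sum(t.lower() in question_tokens for t in non_keyword) / len(non_keyword)
             if non_keyword else 0.0)
    co_occur = sum(1 for w, kw in lexicon
                   if w in question_tokens and kw in program_tokens)
    return match + co_occur

def shaped_policy(candidates, behavior_scores, question_tokens, lexicon, eta=5.0):
    """Renormalize the behavior policy by the critique policy p_c ~ exp(eta * critique)."""
    weights = [b * math.exp(eta * critique_score(question_tokens, y, lexicon))
               for y, b in zip(candidates, behavior_scores)]
    total = sum(weights)
    return [w / total for w in weights]

# Toy usage on the running example; the lexicon entries are hypothetical.
question = "what nation scored the most points".split()
lexicon = [("most", "Max"), ("least", "Min")]
beam = [["Select", "Nation", "Where", "Index", "is", "Min"],    # spurious candidate
        ["Select", "Nation", "Where", "Points", "is", "Max"]]   # correct candidate
print(shaped_policy(beam, [0.6, 0.4], question, lexicon))
```

In this toy run the critique policy overrides the behavior policy's preference for the spurious program, which is the intended effect of shaping.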
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-2
Learning
x: What nation scored the most points? y: Select Nation Where Index is Minimum neural models score(x, y): encode x, encode y, and produce scores Beam search: argmax score(x, y) Find approximated gold meaning representations
x: What nation scored the most points? y: Select Nation Where Index is Minimum neural models score(x, y): encode x, encode y, and produce scores Beam search: argmax score(x, y) Find approximated gold meaning representations
[]
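The "Learning" slide above compresses the search side of training: a scoring model score(x, y) over question and program, and a beam search that keeps the top-scoring partial programs as approximated gold meaning representations. The skeleton below sketches such a beam search under that reading; the expand and score functions are placeholder stand-ins, not the DynSP parser's actual action space.

```python
def beam_search(question, expand, score, beam_size=2, max_steps=10):
    """Grow partial programs action by action, keeping only the top-scoring states."""
    beam = [[]]  # start from the empty program state
    for _ in range(max_steps):
        candidates = [state + [action] for state in beam for action in expand(state)]
        if not candidates:
            break
        candidates.sort(key=lambda y: score(question, y), reverse=True)
        beam = candidates[:beam_size]
    return beam

# Toy usage: a two-step action space and a score that prefers "Max" when "most" is asked.
steps = [["Select Nation"], ["Where Points is Max", "Where Index is Min"]]
expand = lambda state: steps[len(state)] if len(state) < len(steps) else []
score = lambda q, y: sum(1 for a in y if "most" in q and "Max" in a)
print(beam_search("what nation scored the most points", expand, score))
```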
GEM-SciDuet-train-103#paper-1267#slide-3
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible with the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When experimented on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
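The maximum margin reward (MMR) and MAVER updates described in the content above differ only on the negative side: MMR penalizes the single most-violating program, while MAVER spreads the penalty uniformly over every program that violates the margin, which is argued to be more stable. A small sketch of that difference is given below, assuming the beam's model scores and rewards are available as plain lists; it is a simplified reading of the margin definition, not the reference implementation.

```python
def violations(scores, rewards, ref):
    """Indices y' with score(ref) <= score(y') + (R(ref) - R(y')), i.e. margin violations."""
    return [i for i in range(len(scores))
            if i != ref and scores[ref] <= scores[i] + (rewards[ref] - rewards[i])]

def competing_distribution(scores, rewards, ref, average=False):
    """MMR: all competing mass on the most violating program.
    MAVER (average=True): uniform mass over every violating program."""
    viol = violations(scores, rewards, ref)
    q = [0.0] * len(scores)
    if not viol:
        return q
    if average:
        for i in viol:
            q[i] = 1.0 / len(viol)
    else:
        worst = max(viol, key=lambda i: scores[i] + (rewards[ref] - rewards[i]))
        q[worst] = 1.0
    return q

# Toy usage: index 0 is the reference program (highest reward in the beam).
scores = [1.0, 1.4, 1.2]
rewards = [1.0, 0.0, 0.0]
print(competing_distribution(scores, rewards, ref=0))                # MMR
print(competing_distribution(scores, rewards, ref=0, average=True))  # MAVER
```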
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-3
Semantic Parsing with Indirect Supervision
Question: What nation scored the most points? Christelle Le Duff France
Question: What nation scored the most points? Christelle Le Duff France
[]
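The experimental setup above uses a reward R(y, z) given by the Jaccard similarity between the gold answer and the answer the program produces, and, with the reward weight taken to infinity, an exploration policy that amounts to sorting beam candidates by reward and breaking ties with the model score. A toy sketch of that ranking rule follows; the candidate programs, answers, and scores are invented inputs for illustration only.

```python
def jaccard(pred, gold):
    """R(y, z): Jaccard similarity between the program's answer set and the gold answer set."""
    pred, gold = set(pred), set(gold)
    if not pred and not gold:
        return 1.0
    return len(pred & gold) / len(pred | gold)

def rank_beam(candidates, gold_answer):
    """Reward weight -> infinity: sort by reward first, break ties with the model score."""
    return sorted(candidates,
                  key=lambda c: (jaccard(c["answer"], gold_answer), c["score"]),
                  reverse=True)

# Toy usage with hypothetical candidates y1, y2, y3.
beam = [
    {"program": "y1", "answer": ["England"], "score": 0.9},
    {"program": "y2", "answer": ["England"], "score": 0.7},
    {"program": "y3", "answer": ["France"],  "score": 0.8},
]
for c in rank_beam(beam, gold_answer=["England"]):
    print(round(jaccard(c["answer"], ["England"]), 2), c["program"], c["score"])
```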
GEM-SciDuet-train-103#paper-1267#slide-4
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible with the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When experimented on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
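The generalized update equation introduced in Section 4.2 above can be sketched as follows. This is a minimal illustration, assuming the per-candidate score gradients have already been computed as vectors; the function and argument names are made up for this sketch, and the commented instantiations paraphrase the verbal descriptions in the text rather than reproducing the paper's Table 1.

```python
import numpy as np
from typing import Dict, List

def generalized_update(candidates: List[str],
                       grad: Dict[str, np.ndarray],    # gradient of score_theta(y, x, t) per candidate y
                       intensity: Dict[str, float],    # w(y, x, t, z), non-negative scalars
                       competing: Dict[str, float],    # q(y | x, t), a distribution over candidates
                       ) -> np.ndarray:
    """Delta(K) = sum_y w(y) * ( grad_score(y) - sum_y' q(y') * grad_score(y') )."""
    # The inner expectation over all of Y is approximated by the candidate set here.
    baseline = sum(competing[y] * grad[y] for y in candidates)
    return sum(intensity[y] * (grad[y] - baseline) for y in candidates)

# Illustrative instantiations, following the descriptions in the text:
#   MML:        w(y) proportional to p_theta(y | x, t) for candidates that evaluate to the gold
#               answer, 0 otherwise; q is the model distribution p_theta over the candidates.
#   REINFORCE:  w(y_hat) = R(y_hat, z) for one program y_hat sampled from the policy,
#               0 elsewhere; q is again p_theta.
#   MMR:        w(y*) = 1 for the highest-scoring correct program y*, 0 elsewhere;
#               q puts all of its mass on the most violating program.
#   MAVER:      same w as MMR, but q is spread uniformly over all margin-violating programs.
```

Algorithm 1's parameter update then applies this vector with a step size, i.e., theta = theta + mu * Delta(K).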
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-4
Search for Training
In general, there are several spurious programs that execute to the gold answer but are semantically incorrect.
In general, there are several spurious programs that execute to the gold answer but are semantically incorrect.
[]
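As a concrete illustration of the point in this slide, the snippet below runs a correct and a spurious program against a toy table. Only the gold answer England is taken from the running example; the point values and the other nations are invented for illustration. The two programs agree on the original table but diverge once the first two rows are swapped, which is exactly why spurious programs fail to generalize.

```python
# A toy table; values other than the England row are made up for illustration.
table = [
    {"Nation": "England", "Points": 44},
    {"Nation": "Ireland", "Points": 30},
    {"Nation": "France",  "Points": 20},
]

def select_where_points_max(rows):
    """Correct program: Select Nation Where Points is Maximum."""
    return max(rows, key=lambda r: r["Points"])["Nation"]

def select_first_row(rows):
    """Spurious program: Select Nation Where Index is Minimum."""
    return rows[0]["Nation"]

print(select_where_points_max(table), select_first_row(table))        # England England
swapped = [table[1], table[0], table[2]]                              # swap the first two rows
print(select_where_points_max(swapped), select_first_row(swapped))    # England Ireland
```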
GEM-SciDuet-train-103#paper-1267#slide-5
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible with the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When experimented on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-5
Search for Training Spurious Programs
Search for training. Goal: find semantically correct parse! Question: What nation scored the most points? Select Nation Where Points = 44 -> England; Select Nation Where Index is Minimum -> England; Select Nation Where Pts/game is Maximum -> England; Select Nation Where Point is Maximum -> England. All programs above generate right answers but only one is correct.
Search for training. Goal: find semantically correct parse! Question: What nation scored the most points? Select Nation Where Points = 44 -> England; Select Nation Where Index is Minimum -> England; Select Nation Where Pts/game is Maximum -> England; Select Nation Where Point is Maximum -> England. All programs above generate right answers but only one is correct.
[]
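The lexical critique and policy shaping of Section 3 can be sketched against the example in this slide. The tokenizer, keyword list, and the single lexical pair below are illustrative assumptions (the paper reports using roughly 40 hand-written pairs and eta = 5), not the authors' implementation.

```python
import math
import re
from typing import Dict, List, Tuple

KEYWORDS = {"select", "where", "is", "maximum", "minimum"}   # illustrative keyword list

def tokens(text: str) -> List[str]:
    return re.findall(r"[a-z0-9/]+", text.lower())

def match(question: str, program: str) -> float:
    """Fraction of non-keyword program tokens that also occur in the question."""
    q = set(tokens(question))
    p = [t for t in tokens(program) if t not in KEYWORDS]
    return sum(t in q for t in p) / len(p) if p else 0.0

def co_occur(question: str, program: str, lexicon: List[Tuple[str, str]]) -> float:
    """Counts lexical pairs (w, omega) with w in the question and omega in the program."""
    q, p = set(tokens(question)), set(tokens(program))
    return float(sum(w in q and om in p for w, om in lexicon))

def critique(question: str, program: str, lexicon: List[Tuple[str, str]]) -> float:
    return match(question, program) + co_occur(question, program, lexicon)

def shape(behavior: Dict[str, float], question: str,
          lexicon: List[Tuple[str, str]], eta: float = 5.0) -> Dict[str, float]:
    """Policy shaping: p_b(y) proportional to b_theta(y) * exp(eta * critique(y, x))."""
    unnorm = {y: b * math.exp(eta * critique(question, y, lexicon))
              for y, b in behavior.items()}
    z = sum(unnorm.values())
    return {y: v / z for y, v in unnorm.items()}

question = "What nation scored the most points?"
lexicon = [("most", "maximum")]          # one illustrative pair out of the ~40 used in the paper
behavior = {                              # pretend the base search policy is indifferent
    "Select Nation Where Index is Minimum": 0.5,
    "Select Nation Where Points is Maximum": 0.5,
}
print(shape(behavior, question, lexicon))
```

With these scores the shaped policy concentrates almost all probability on the semantically correct program, even though the base policy was indifferent between the two candidates.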
GEM-SciDuet-train-103#paper-1267#slide-6
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible with the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When evaluated on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
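As a reading aid for the policy-shaping description in the paper content above (Eqs. 2-3), the following is a small, self-contained sketch. The toy lexicon, the candidate programs, the keyword set and the raw behavior scores are all made up; in the paper the behavior policy comes from the neural parser's score function and eta is a tuned hyperparameter.

```python
import math

lexicon = {("most", "Max"), ("least", "Min")}          # hypothetical (token, keyword) pairs
question = "what nation scored the most points".split()
keywords = {"Select", "Where", "Max", "Min", "is", "="}

def match(program_tokens):
    # ratio of non-keyword program tokens that also appear in the question
    content = [tok for tok in program_tokens if tok not in keywords]
    return sum(tok.lower() in question for tok in content) / max(len(content), 1)

def co_occur(program_tokens):
    # count lexicon pairs whose token appears in the question and keyword in the program
    return sum((w in question) and (kw in program_tokens) for w, kw in lexicon)

def critique(program_tokens):
    return match(program_tokens) + co_occur(program_tokens)

def shaped_policy(candidates, eta=5.0):
    # p_b(y) proportional to b_theta(y) * p_c(y), with p_c(y) prop. to exp(eta * critique(y))
    weights = [math.exp(score) * math.exp(eta * critique(toks))
               for toks, score in candidates]
    z = sum(weights)
    return [w / z for w in weights]

candidates = [
    (["Select", "Nation", "Where", "Index", "is", "Min"], 1.2),   # spurious but high-scoring
    (["Select", "Nation", "Where", "Points", "is", "Max"], 1.0),  # semantically correct
]
print(shaped_policy(candidates))   # most probability mass shifts to the second program
```

The point of the sketch is only that multiplying the behavior policy by a cheap text-compatibility prior re-ranks the beam toward programs that share surface evidence with the question.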
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-6
Update Step
Generally there are several methods to update the model. Examples: maximum marginal likelihood, reinforcement learning, margin methods.
Generally there are several methods to update the model. Examples: maximum marginal likelihood, reinforcement learning, margin methods.
[]
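The slide above lists the update choices; the generalized update equation in the record's paper content writes them all as Delta(K) = sum_y w(y) * (grad score(y) - sum_y' q(y') * grad score(y')). The sketch below, a hypothetical illustration rather than the paper's implementation, uses a linear scorer (so the gradient of a program's score is just its feature vector) to show how different intensities w and competing distributions q give an MML-style and an MMR-style update. Feature vectors, rewards and the hand-picked reference and most-violating programs are assumptions for the example.

```python
import numpy as np

phi = {                                           # hypothetical program feature vectors
    "correct":  np.array([1.0, 0.0, 1.0]),
    "spurious": np.array([0.0, 1.0, 1.0]),
    "wrong":    np.array([0.0, 0.0, 1.0]),
}
theta = np.zeros(3)
reward = {"correct": 1.0, "spurious": 1.0, "wrong": 0.0}   # denotation match only

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def generalized_update(K, w, q):
    # Delta(K) = sum_y w(y) * (phi(y) - sum_y' q(y') * phi(y'))
    expectation = sum(q[y] * phi[y] for y in K)
    return sum(w[y] * (phi[y] - expectation) for y in K)

K = list(phi)
p = dict(zip(K, softmax(np.array([theta @ phi[y] for y in K]))))

# MML-style: intensity spread over all reward-1 programs; the competing
# distribution is the model distribution p.
w_mml = {y: reward[y] * p[y] / sum(reward[z] * p[z] for z in K) for y in K}

# MMR-style: all intensity on one reference program; the competing distribution
# is concentrated on the most violating program (both hand-picked here, whereas
# the paper selects them by model score and reward).
w_mmr = {y: float(y == "correct") for y in K}
q_mmr = {y: float(y == "spurious") for y in K}

theta_mml = theta + 0.1 * generalized_update(K, w_mml, p)
theta_mmr = theta + 0.1 * generalized_update(K, w_mmr, q_mmr)
print(theta_mml, theta_mmr)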
GEM-SciDuet-train-103#paper-1267#slide-7
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible with the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When evaluated on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameters \theta, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b_\theta(y | x, t, z) \propto \exp(\lambda \cdot R(y, z) + \mathrm{score}_\theta(y, x, t)), where \lambda is a weight on the reward term.", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of \lambda is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin-based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform the previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient REINFORCE, a simple policy gradient method, achieved extremely poor performance.", "This is likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated by the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as the meritocratic hyperparameter increases, the improvement due to policy shaping for the meritocratic update increases.", "This supports our hypothesis that the aggressive updates of margin-based methods are beneficial when the search method is more accurate, as compared to maximum marginal likelihood, which hedges its bet among all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helped us point out that MMR could be unstable due to its peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we calculate the relevant statistics and report them in Table 4 .", "Policy Shaping vs Model Shaping The critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representations was first studied by Montague (1970) .", "Early work on learning semantic parsers relies on labeled formal representations as the supervision signals (Zettlemoyer and Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representations generally requires expensive annotations by an expert, distant supervision approaches, where
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs, considered adding noise to diversify the search procedure, and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Manning, 2016) , semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide performance superior to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin-based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work has studied connections between margin-based methods and likelihood maximization in the supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than those that only update the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using the EM algorithm and its variants (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning for margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various variants of the EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation for semantic parsing from denotations and propose a policy shaping method for addressing the spurious program challenge.", "In the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
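The generalized update described above lends itself to a compact implementation. Below is a minimal sketch, assuming the gradient of score_theta has already been computed for every candidate program; the function and argument names (generalized_update, intensity, competing_dist) are illustrative, not part of any released code.

```python
import numpy as np

def generalized_update(candidates, grads, intensity, competing_dist):
    """Delta(K) = sum_{y in K} w(y) * (grad(y) - sum_{y'} q(y') * grad(y'))."""
    dim = next(iter(grads.values())).shape
    # Negative part: expected score gradient under the competing distribution q(.).
    expected_grad = np.zeros(dim)
    for y, q in competing_dist.items():
        expected_grad += q * grads[y]
    # Positive part: gradients of favored programs, weighted by the intensity w(.).
    delta = np.zeros(dim)
    for y in candidates:
        delta += intensity.get(y, 0.0) * (grads[y] - expected_grad)
    return delta

# Maximum margin reward as a special case: intensity 1.0 on the reference
# program only, competing distribution peaked on the most violating program.
grads = {"reference": np.array([1.0, 0.0]), "violator": np.array([0.0, 1.0])}
delta = generalized_update(["reference", "violator"], grads,
                           intensity={"reference": 1.0},
                           competing_dist={"violator": 1.0})
```

Swapping in the intensities and competing distributions listed in Table 1 recovers MML, meritocratic updates, REINFORCE, MMR and MAVER from this same routine.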
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-7
Contributions
(1) Policy Shaping for handling spurious programs. (2) Generalized Update Equation for generalizing common update strategies and allowing novel updates. (1) and (2) seem independent, but they interact with each other!! 5% absolute improvement over SOTA on SQA dataset
(1) Policy Shaping for handling spurious programs. (2) Generalized Update Equation for generalizing common update strategies and allowing novel updates. (1) and (2) seem independent, but they interact with each other!! 5% absolute improvement over SOTA on SQA dataset
[]
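A rough sketch of the MAVER objective introduced above, assuming every candidate in the search set comes with a model score and a reward in [0, 1]; maver_loss and its arguments are hypothetical names, and a real implementation would backpropagate through this loss rather than return a float.

```python
import numpy as np

def maver_loss(scores, rewards, evaluates_to_gold):
    """scores, rewards: arrays over the candidate set K; evaluates_to_gold is a
    boolean mask of programs whose execution matches the gold answer z."""
    if not evaluates_to_gold.any():
        return 0.0
    # Reference program: the highest-scoring candidate that evaluates to z.
    ref = int(np.where(evaluates_to_gold, scores, -np.inf).argmax())
    # Margin violation of y': score(y') + R(y_ref, z) - R(y', z) - score(y_ref).
    violations = scores + (rewards[ref] - rewards) - scores[ref]
    violations[ref] = 0.0
    violating = violations > 0
    if not violating.any():
        return 0.0
    # MMR would take violations.max(); MAVER averages over all violating programs.
    return float(violations[violating].mean())

example = maver_loss(np.array([2.0, 2.5, 1.0]),
                     np.array([1.0, 0.0, 0.3]),
                     np.array([True, False, False]))
```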
GEM-SciDuet-train-103#paper-1267#slide-8
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible to the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When experimented on a recently proposed sequential question answering dataset, our framework leads to a new state-of-theart model that outperforms previous work by 5.0% absolute on exact match accuracy.
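As a concrete illustration of the shaped behavior policy mentioned in the abstract (the normalized product of the model's exploration policy and a critique policy), here is a minimal sketch over an explicit candidate set; it assumes both policies are available as unnormalized log-scores, which is how they would typically be combined inside beam search.

```python
import numpy as np

def shaped_policy(model_log_scores, critique_log_scores):
    """p_b(y|x,t) proportional to b_theta(y|x,t,z) * p_c(y|x,t)."""
    logits = (np.asarray(model_log_scores, dtype=float)
              + np.asarray(critique_log_scores, dtype=float))
    logits -= logits.max()              # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# The critique policy boosts the second candidate, shifting exploration toward it.
print(shaped_policy([2.0, 1.0, 0.5], [0.0, 1.5, 0.0]))
```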
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
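The lexical critique policy described in the content above combines a surface-form match feature with a co-occurrence feature over a small lexicon. The sketch below is a toy version: the lexicon entries, keyword list and lower-casing are assumptions for illustration, not the 40-pair lexicon used in the experiments.

```python
# Toy lexicon of (question token, SQL keyword) pairs and a toy keyword list.
LEXICON = [("most", "Max"), ("highest", "Max"), ("least", "Min"), ("not", "NotEqual")]
SQL_KEYWORDS = {"Select", "Where", "Max", "Min", "NotEqual", "is"}

def match_score(question_tokens, program_tokens):
    """Ratio of non-keyword program tokens that also appear in the question."""
    non_keywords = [tok for tok in program_tokens if tok not in SQL_KEYWORDS]
    if not non_keywords:
        return 0.0
    return sum(tok.lower() in question_tokens for tok in in_order := non_keywords) / len(non_keywords) if False else \
        sum(tok.lower() in question_tokens for tok in non_keywords) / len(non_keywords)

def co_occur_score(question_tokens, program_tokens):
    """Count of lexicon pairs (w, kw) with w in the question and kw in the program."""
    return sum(1 for w, kw in LEXICON if w in question_tokens and kw in program_tokens)

def critique_score(question_tokens, program_tokens):
    # The critique policy is proportional to exp(eta * critique(y, x)).
    return (match_score(question_tokens, program_tokens)
            + co_occur_score(question_tokens, program_tokens))

question = "which nation scored the most points".split()
program = "Select Nation Where Points is Max".split()
print(critique_score(question, program))   # match = 1.0, co_occur = 1 -> 2.0
```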
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-8
Learning from Indirect Supervision
Question x, Table t, Answer z, Parameters ΞΈ [Search for Training] With x, t, z, beam search suitable K = {y} [Update] Update ΞΈ according to K = {y} Search in training. Goal: finding semantically correct y [Update] Update ΞΈ according to {y} Many different ways of update
Question x, Table t, Answer z, Parameters ΞΈ [Search for Training] With x, t, z, beam search suitable K = {y} [Update] Update ΞΈ according to K = {y} Search in training. Goal: finding semantically correct y [Update] Update ΞΈ according to {y} Many different ways of update
[]
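The search-then-update loop summarized in the slide content above (Algorithm 1 in the paper) can be sketched as follows; search_fn and update_fn are placeholders for the shaped beam search and for whichever instantiation of the generalized update is chosen, and the plain gradient step assumes the parameters support vector arithmetic.

```python
def train(examples, theta, search_fn, update_fn, epochs=30, learning_rate=0.1):
    """examples:  iterable of (question, table, answer) triples
    search_fn: (x, t, z, theta) -> candidate program set K
    update_fn: (K, x, t, z, theta) -> update direction Delta(K)"""
    for _ in range(epochs):
        for x, t, z in examples:
            candidates = search_fn(x, t, z, theta)          # search step
            delta = update_fn(candidates, x, t, z, theta)   # update step
            theta = theta + learning_rate * delta           # gradient step
    return theta
```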
GEM-SciDuet-train-103#paper-1267#slide-9
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible to the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When experimented on a recently proposed sequential question answering dataset, our framework leads to a new state-of-theart model that outperforms previous work by 5.0% absolute on exact match accuracy.
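For the base exploration policy used during search (described in the setup that follows), pushing the reward weight to infinity amounts to ranking candidates by reward and breaking ties with the model score; the helper names below (rank_beam, jaccard) are illustrative, and pruning of syntactically invalid programs is omitted.

```python
def jaccard(predicted, gold):
    """Reward R(y, z): Jaccard similarity of the predicted and gold answer sets."""
    predicted, gold = set(predicted), set(gold)
    if not predicted and not gold:
        return 1.0
    return len(predicted & gold) / len(predicted | gold)

def rank_beam(candidates, gold_answer, execute_fn, score_fn):
    """Sort (partial) programs by reward, then by current model score."""
    return sorted(candidates,
                  key=lambda y: (jaccard(execute_fn(y), gold_answer), score_fn(y)),
                  reverse=True)
```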
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
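A minimal Python sketch of the lexical critique policy and policy shaping described in the content above (Eqs. 2-3). This is an illustration only: the two lexicon pairs, the tokenization, and every function name (match_score, co_occur_score, shaped_policy) are assumptions made for exposition, not the authors' released code.

```python
import math

# Tiny stand-in for the lexicon of (question token, program keyword) pairs; illustrative only.
LEXICON = {("most", "Max"), ("not", "NotEqual")}

def match_score(question_tokens, program_tokens, keywords):
    # Ratio of non-keyword program tokens that also appear in the question (the match feature).
    non_keyword = [tok for tok in program_tokens if tok not in keywords]
    if not non_keyword:
        return 0.0
    return sum(tok in question_tokens for tok in non_keyword) / len(non_keyword)

def co_occur_score(question_tokens, program_tokens):
    # Number of lexicon pairs (w, kw) with w in the question and kw in the program.
    return sum((w in question_tokens) and (kw in program_tokens) for w, kw in LEXICON)

def critique(question_tokens, program_tokens, keywords):
    return match_score(question_tokens, program_tokens, keywords) + \
           co_occur_score(question_tokens, program_tokens)

def shaped_policy(behavior_probs, critique_scores, eta=5.0):
    # p_b(y) proportional to b_theta(y|x,t,z) * p_c(y|x,t), with p_c(y|x,t) proportional to
    # exp(eta * critique(y, x)). behavior_probs are assumed to be positive model probabilities.
    weights = [b * math.exp(eta * c) for b, c in zip(behavior_probs, critique_scores)]
    total = sum(weights)
    return [w / total for w in weights]
```

Exploration would then sample (or keep the top of the beam) from the shaped policy instead of the raw behavior policy, which is what biases the search away from spurious programs.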
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-9
Spurious Programs
Question x, Table t, Answer z, Parameters θ. [Search for Training] With x, t, z, beam-search for suitable programs {y}. If the model selects a spurious program for the update, it increases the chance of selecting spurious programs in the future.
Question x, Table t, Answer z, Parameters θ. [Search for Training] With x, t, z, beam-search for suitable programs {y}. If the model selects a spurious program for the update, it increases the chance of selecting spurious programs in the future.
[]
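The difference between MMR and the MAVER variant introduced above can also be sketched briefly. This is an illustrative reading under assumptions: reward 1.0 is used to mark candidates that evaluate to the gold answer, MAVER is taken to average over all positive margin violations, and score/reward are hypothetical callables rather than the authors' implementation.

```python
def margin_losses(candidates, score, reward):
    # Reference program: highest-scoring candidate that evaluates to the gold answer (assumed reward 1.0).
    correct = [y for y in candidates if reward(y) == 1.0]
    if not correct:
        return 0.0, 0.0
    y_ref = max(correct, key=score)
    # Margin delta(y_ref, y') = R(y_ref) - R(y'); a candidate violates the margin (Eq. 7)
    # when score(y') + delta - score(y_ref) > 0.
    violations = [score(y) + (reward(y_ref) - reward(y)) - score(y_ref)
                  for y in candidates if y is not y_ref]
    violations = [v for v in violations if v > 0]
    mmr_loss = max(violations) if violations else 0.0                      # most violating program only
    maver_loss = sum(violations) / len(violations) if violations else 0.0  # average over all violators
    return mmr_loss, maver_loss
```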
GEM-SciDuet-train-103#paper-1267#slide-11
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible with the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When evaluated on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-11
Search with Shaped Policy
Question x, Table t, Answer z, Parameters θ. [Search for Training] With x, t, z, beam-search for suitable programs {y}.
Question x, Table t, Answer z, Parameters θ. [Search for Training] With x, t, z, beam-search for suitable programs {y}.
[]
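For completeness, the training reward and evaluation metric mentioned in the experimental setup above are simple to state in code; the sketch assumes answers are represented as sets of cell values, which is an assumption about the data representation rather than something stated in the text.

```python
def jaccard_reward(predicted_answer, gold_answer):
    # R(y, z): Jaccard similarity between the program's answer and the gold answer.
    predicted, gold = set(predicted_answer), set(gold_answer)
    if not predicted and not gold:
        return 1.0
    return len(predicted & gold) / len(predicted | gold)

def exact_match(predicted_answer, gold_answer):
    # Evaluation metric: exact match on the answer.
    return float(set(predicted_answer) == set(gold_answer))
```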
GEM-SciDuet-train-103#paper-1267#slide-12
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible with the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When evaluated on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
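As a minimal illustration of the critique policy and policy shaping described in the paper content above, the following Python sketch computes the critique score as the sum of the surface-form match ratio and the lexicon co-occurrence count, and renormalizes a behavior policy by exp(eta * critique), as in Eq. (2) and (3). It is not the authors' released code; the names critique_score, shaped_policy, keywords and lexicon, and the lowercasing of tokens, are illustrative assumptions.

import math

def critique_score(program_tokens, question_tokens, keywords, lexicon):
    # match(x, y): fraction of non-keyword program tokens that also appear in the question
    question = {t.lower() for t in question_tokens}
    non_keyword = [t.lower() for t in program_tokens if t not in keywords]
    match = sum(t in question for t in non_keyword) / max(len(non_keyword), 1)
    # co_occur(y, x): number of lexicon pairs (word, keyword) with the word in the
    # question and the keyword in the program
    program = set(program_tokens)
    co_occur = sum((w.lower() in question) and (k in program) for w, k in lexicon)
    return match + co_occur

def shaped_policy(candidates, behavior_probs, question_tokens, keywords, lexicon, eta=5.0):
    # Eq. (2)-(3): p_b(y|x,t) is proportional to b_theta(y|x,t,z) * exp(eta * critique(y, x))
    weights = [b * math.exp(eta * critique_score(y, question_tokens, keywords, lexicon))
               for y, b in zip(candidates, behavior_probs)]
    total = sum(weights)
    return [w / total for w in weights]

In practice the shaped policy would be applied to the beam of candidate programs produced by the search step, so that exploration is biased towards programs the critique policy prefers.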
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-12
Critique Policy
1. Surface-form Match: Features triggered for constants in the program that match a token in the question. 2. Lexical Pair Score: Features triggered between keywords and tokens (e.g., Maximum and most).
1. Surface-form Match: Features triggered for constants in the program that match a token in the question. 2. Lexical Pair Score: Features triggered between keywords and tokens (e.g., Maximum and most).
[]
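To make the special cases of the generalized update concrete, one plausible reading of Table 1 is sketched below for maximum marginal likelihood, maximum margin reward, and the proposed MAVER variant. The function names are illustrative, and the uniform competing distribution over all margin-violating programs for MAVER is an assumption based on the description of "average violation" in the paper content above.

def mml_terms(correct_programs, model_prob):
    # Maximum marginal likelihood: non-zero intensity on every correct program,
    # proportional to p(y|x,t); the competing distribution is the model distribution itself.
    normalizer = sum(model_prob[y] for y in correct_programs)
    intensity = {y: model_prob[y] / normalizer for y in correct_programs}
    return intensity, dict(model_prob)

def mmr_terms(reference, most_violating):
    # Maximum margin reward: all intensity on the reference program,
    # all competing mass on the single most violating program.
    return {reference: 1.0}, {most_violating: 1.0}

def maver_terms(reference, violating_programs):
    # MAVER: same intensity as MMR, but the penalty is spread over every program
    # that violates the margin (uniform spreading is an assumption here).
    competing = {y: 1.0 / len(violating_programs) for y in violating_programs}
    return {reference: 1.0}, competing

Each pair returned by these helpers can be passed to a generalized update routine such as the one sketched earlier, which is what makes the single update equation cover the different algorithms.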
GEM-SciDuet-train-103#paper-1267#slide-13
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible to the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When experimented on a recently proposed sequential question answering dataset, our framework leads to a new state-of-theart model that outperforms previous work by 5.0% absolute on exact match accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-13
Critique Policy Features
Question: What nation scored the most points? Select Nation Where Points is Maximum Select Nation Where Index is Minimum Select Nation Where Pts/game is Maximum Select Nation Where Name = Karen Andrew
Question: What nation scored the most points? Select Nation Where Points is Maximum Select Nation Where Index is Minimum Select Nation Where Pts/game is Maximum Select Nation Where Name = Karen Andrew
[]
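As a hypothetical walk-through of the lexical critique features on the slide example above (the question and candidate programs are taken from the slide, and the pair ("most", "Maximum") is one of the paper's illustrative lexicon entries), the correct program receives the highest critique score because it shares the token "Points" with the question and triggers the co-occurrence pair. Tokens are lowercased here purely for simplicity.

question = set("what nation scored the most points".split())
keywords = {"select", "where", "is", "=", "maximum", "minimum"}
lexicon = [("most", "maximum"), ("not", "notequal")]
candidates = {
    "correct":   "select nation where points is maximum".split(),
    "spurious1": "select nation where index is minimum".split(),
    "spurious2": "select nation where pts/game is maximum".split(),
    "spurious3": "select nation where name = karen andrew".split(),
}
for name, program in candidates.items():
    non_keyword = [t for t in program if t not in keywords]
    match = sum(t in question for t in non_keyword) / max(len(non_keyword), 1)
    co_occur = sum(w in question and k in program for w, k in lexicon)
    print(name, match + co_occur)  # "correct" scores 2.0, higher than every spurious program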
GEM-SciDuet-train-103#paper-1267#slide-14
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible to the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When experimented on a recently proposed sequential question answering dataset, our framework leads to a new state-of-theart model that outperforms previous work by 5.0% absolute on exact match accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
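The critique policy and policy-shaping step described in the paper text above (the match and co_occur features and Eqs. 2-3) can be sketched as follows. This is a minimal illustration and not the authors' released implementation: the toy lexicon, the keyword set, and the representation of a program as a list of token strings are assumptions made only for this example, and it assumes the behavior policy is the Boltzmann policy over model scores.

import math

# Assumed toy lexicon of (question token, program keyword) pairs; the paper reports
# using roughly 40 such pairs built from common superlatives and comparators.
LEXICON = {("most", "Max"), ("least", "Min"), ("not", "NotEqual")}
KEYWORDS = {"Select", "Where", "Max", "Min", "NotEqual", "="}

def match(question_tokens, program_tokens):
    # Ratio of non-keyword program tokens that also appear in the question.
    content = [t for t in program_tokens if t not in KEYWORDS]
    return sum(t in question_tokens for t in content) / len(content) if content else 0.0

def co_occur(question_tokens, program_tokens):
    # Number of lexicon pairs (w, kw) with w in the question and kw in the program.
    return sum((w in question_tokens) and (kw in program_tokens) for w, kw in LEXICON)

def critique(question_tokens, program_tokens):
    # Critique score is the sum of the surface-match and co-occurrence features.
    return match(question_tokens, program_tokens) + co_occur(question_tokens, program_tokens)

def shaped_distribution(model_scores, critique_scores, eta=5.0):
    # Policy shaping for a Boltzmann behavior policy: the shaped policy is
    # proportional to exp(model score) * exp(eta * critique score).
    logits = [s + eta * c for s, c in zip(model_scores, critique_scores)]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

On the paper's running example, a candidate like Select Nation Where Points is Maximum would receive a higher critique score than the spurious Select Nation Where Name = Karen Andrew, since Points overlaps with the question and a pair such as ("most", "Max") can fire, which is exactly the bias the shaped exploration policy is meant to add.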
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
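A minimal sketch of the generalized update equation (Eq. 9) discussed in the paper text above, with the intensity w and the competing distribution q instantiated for maximum marginal likelihood, maximum margin reward, and the proposed MAVER variant. The plain-list gradients, the restriction of all sums to the candidate set K, the assumption that K contains at least one correct program, and the helper names are simplifications for illustration; the actual parser scores programs with neural modules.

import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def generalized_update(grads, w, q):
    # Delta(K) = sum_y w[y] * (grad[y] - sum_{y'} q[y'] * grad[y'])
    dim = len(grads[0])
    expected = [sum(q[j] * grads[j][d] for j in range(len(grads))) for d in range(dim)]
    return [sum(w[y] * (grads[y][d] - expected[d]) for y in range(len(grads)))
            for d in range(dim)]

def mml_w_q(scores, correct):
    # MML: intensity spreads over all correct candidates in proportion to p(y);
    # the competing distribution is the model distribution itself.
    p = softmax(scores)
    mass = sum(p[i] for i in correct)
    w = [p[i] / mass if i in correct else 0.0 for i in range(len(scores))]
    return w, p

def mmr_w_q(scores, correct, rewards):
    # MMR: intensity 1.0 on the highest-scoring correct (reference) program;
    # the competing distribution puts all mass on the most violating program.
    ref = max(correct, key=lambda i: scores[i])
    viol = max(range(len(scores)),
               key=lambda i: scores[i] + (rewards[ref] - rewards[i]) - scores[ref])
    w = [float(i == ref) for i in range(len(scores))]
    q = [float(i == viol) for i in range(len(scores))]
    return w, q

def maver_w_q(scores, correct, rewards):
    # MAVER, as read from the text: same intensity as MMR, but the competing
    # distribution averages uniformly over every margin-violating program.
    ref = max(correct, key=lambda i: scores[i])
    viol = [i for i in range(len(scores))
            if i != ref and scores[ref] < scores[i] + (rewards[ref] - rewards[i])]
    if not viol:
        return [0.0] * len(scores), [0.0] * len(scores)  # no violation, no update
    w = [float(i == ref) for i in range(len(scores))]
    q = [1.0 / len(viol) if i in viol else 0.0 for i in range(len(scores))]
    return w, q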
GEM-SciDuet-train-103#paper-1267#slide-14
Learning Pipeline Revisited
[Search for Training] With x, t, z, beam search suitable K = {y} Using policy shaping to find better K (shaping affects here) [Update] Update ΞΈ according to K = {y} What is the better objective function J?
[Search for Training] With x, t, z, beam search suitable K = {y} Using policy shaping to find better K (shaping affects here) [Update] Update ΞΈ according to K = {y} What is the better objective function J?
[]
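The slide above summarizes the two-step training pipeline (Algorithm 1 in the paper text): shaped-policy search for a candidate set K, then a generalized parameter update. The loop below is only a schematic sketch under stated assumptions; the search routine, executor, update rule, and example format are placeholder callables rather than the paper's DynSP++ implementation.

def train(examples, shaped_search, execute, update_from_candidates, apply_update,
          epochs=30):
    # examples: iterable of (question x, table t, answer z) triples.
    # shaped_search(x, t, z) -> candidate programs K, explored with the shaped policy.
    # execute(y, t) -> denotation of program y on table t.
    # update_from_candidates(K, correct, x, t) -> update Delta(K) built from a chosen
    #     intensity / competing-distribution pair (MML, MMR, MAVER, ...).
    # apply_update(delta) -> applies the update to the model parameters theta.
    for _ in range(epochs):
        for x, t, z in examples:
            candidates = shaped_search(x, t, z)                  # search step
            correct = [y for y in candidates if execute(y, t) == z]
            if not candidates or not correct:
                continue                                         # no usable signal
            delta = update_from_candidates(candidates, correct, x, t)
            apply_update(delta)                                  # update step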
GEM-SciDuet-train-103#paper-1267#slide-15
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
GEM-SciDuet-train-103#paper-1267#slide-15
Objective Functions Look Different
Maximum Marginal Likelihood (MML) Maximum Margin Reward (MMR) Maximum Reward Program Most violated program generated according to reward-augmented inference
Maximum Marginal Likelihood (MML) Maximum Margin Reward (MMR) Maximum Reward Program Most violated program generated according to reward-augmented inference
[]
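The slide above contrasts the MML and MMR objectives. The sketch below makes that contrast concrete by computing both losses from a candidate set, following Eq. (5) and the most-violation margin objective in the accompanying paper content; the dense score/reward arrays and the approximation of the full program space Y by the candidate set K are simplifying assumptions.

```python
# Schematic reconstruction (an assumption-laden sketch, not the original code)
# of the MML and MMR losses over a candidate set K.
import numpy as np


def mml_loss(scores: np.ndarray, correct: np.ndarray) -> float:
    """Negative log marginal likelihood, with Y approximated by the candidate set K.
    scores:  (|K|,) model scores for each candidate program
    correct: (|K|,) boolean mask, True if the program evaluates to the gold answer
    """
    if not correct.any():
        return float("inf")                      # no candidate reaches the gold answer
    log_z = np.logaddexp.reduce(scores)          # log sum over all candidates
    log_num = np.logaddexp.reduce(scores[correct])  # log sum over correct candidates
    return float(log_z - log_num)


def mmr_loss(scores: np.ndarray, correct: np.ndarray, rewards: np.ndarray) -> float:
    """Most-violation margin loss: reference = best-scoring correct program,
    margin Delta(y, y', z) = R(y, z) - R(y', z)."""
    if not correct.any():
        return 0.0
    ref = int(np.argmax(np.where(correct, scores, -np.inf)))
    violation = scores + (rewards[ref] - rewards) - scores[ref]
    return float(max(0.0, violation.max()))
```

MML spreads credit over every candidate that reaches the gold answer, whereas MMR commits to a single reference program and a single most-violating competitor, which is why the two objectives behave so differently under noisy search.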
GEM-SciDuet-train-103#paper-1267#slide-16
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible with the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When experimented on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
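To illustrate the policy-shaping step (Eqs. 2-3) described in the record above, here is a small sketch of a lexical critique policy (surface-form match plus token/keyword co-occurrence) and of shaping a behavior policy with it. The lexicon entries, keyword list, and whitespace tokenization are assumptions made for the example, not the exact resources used in the experiments.

```python
# Illustrative sketch of lexical policy shaping. The lexicon, keyword list and
# tokenization are assumptions for the example, not the original resources.
import math
from typing import List, Sequence, Set, Tuple

LEXICON: List[Tuple[str, str]] = [("most", "Max"), ("highest", "Max"),
                                  ("least", "Min"), ("not", "NotEqual")]
KEYWORDS: Set[str] = {"Select", "Where", "is", "Max", "Min", "NotEqual", "="}


def match(question: Set[str], program: Sequence[str]) -> float:
    """Ratio of non-keyword program tokens that also appear in the question."""
    content = [tok for tok in program if tok not in KEYWORDS]
    return sum(tok.lower() in question for tok in content) / len(content) if content else 0.0


def co_occur(question: Set[str], program: Sequence[str]) -> float:
    """Number of lexicon pairs (w, omega) with w in the question and omega in the program."""
    prog = set(program)
    return float(sum(w in question and kw in prog for w, kw in LEXICON))


def shaped_policy(question: str,
                  programs: List[Sequence[str]],
                  behavior_scores: Sequence[float],
                  eta: float = 5.0) -> List[float]:
    """Shaped policy proportional to behavior policy times critique policy, normalized."""
    q_tokens = set(question.lower().split())
    critique = [match(q_tokens, p) + co_occur(q_tokens, p) for p in programs]
    logits = [b + eta * c for b, c in zip(behavior_scores, critique)]
    m = max(logits)
    weights = [math.exp(v - m) for v in logits]
    total = sum(weights)
    return [w / total for w in weights]
```

Because the critique score only reweights the behavior policy during search, it biases exploration without being baked into the trained model, which is the distinction the paper draws between policy shaping and model shaping.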
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-16
Update Rules are Similar
Maximum Marginal Likelihood (MML)
Maximum Marginal Likelihood (MML)
[]
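The slide above ("Update Rules are Similar") connects to MAVER, the variant introduced in the paper content that penalizes all margin-violating programs rather than only the most violating one. The sketch below is one plausible reading of that description, averaging the hinge violation over the violating set; the exact form used by the authors may differ, so treat `maver_loss` as an assumption rather than the paper's formulation.

```python
# One plausible reading (an assumption, not the authors' exact formulation) of
# the MAVER objective: average the margin violation over all violating programs
# instead of taking only the single most violating one.
import numpy as np


def maver_loss(scores: np.ndarray, correct: np.ndarray, rewards: np.ndarray) -> float:
    """scores/rewards: (|K|,) arrays over candidate programs; correct: boolean mask."""
    if not correct.any():
        return 0.0                                           # no reference program available
    ref = int(np.argmax(np.where(correct, scores, -np.inf)))
    margin = scores + (rewards[ref] - rewards) - scores[ref]  # violation of each candidate
    violating = margin > 0.0
    if not violating.any():
        return 0.0
    return float(margin[violating].mean())
```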
GEM-SciDuet-train-103#paper-1267#slide-17
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible with the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When experimented on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-17
Generalized Update Equation
[Update] Update ΞΈ, according to K = {y}
[Update] Update ΞΈ, according to K = {y}
[]
GEM-SciDuet-train-103#paper-1267#slide-19
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible to the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When experimented on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-19
Results on SQA Answer Accuracy
Policy shaping helps improve performance. With policy shaping, different updates matter even more. Achieves new state-of-the-art (previously 44.7%) on SQA
Policy shaping helps improve performance. With policy shaping, different updates matter even more. Achieves new state-of-the-art (previously 44.7%) on SQA
[]
GEM-SciDuet-train-103#paper-1267#slide-20
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible to the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When experimented on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
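To make the policy-shaping step described above concrete, here is a minimal Python sketch of the lexical critique policy (the match and co_occur scores) and the shaped exploration policy from Eq. (2)-(3). The lexicon entries, the dict-based program representation, and the default eta value are illustrative assumptions, not the authors' released implementation.

```python
import math

# Hypothetical lexicon of (question token -> program keyword) pairs. The paper uses
# about 40 such pairs built from common superlatives/comparators; these entries are
# illustrative only.
LEXICON = [("most", "Max"), ("highest", "Max"), ("least", "Min"), ("not", "NotEqual")]

def critique_score(question_tokens, program_tokens, program_keywords):
    """critique(y, x) = match(x, y) + co_occur(y, x), as described in Section 3."""
    non_kw = [t for t in program_tokens if t not in program_keywords]
    match = sum(t in question_tokens for t in non_kw) / len(non_kw) if non_kw else 0.0
    co_occur = sum(1 for w, kw in LEXICON
                   if w in question_tokens and kw in program_keywords)
    return match + co_occur

def shaped_policy(programs, behavior_log_scores, question_tokens, eta=5.0):
    """Policy shaping (Eq. 2): p_b(y|x,t) proportional to b_theta(y|x,t,z) * p_c(y|x,t),
    with p_c(y|x,t) proportional to exp(eta * critique(y, x)) (Eq. 3).

    behavior_log_scores are unnormalized log-scores of the behavior policy, so the
    normalizers cancel in the softmax below. eta=5 mirrors the value reported in the
    paper's experiments and is assumed here as a default.
    """
    logits = [s + eta * critique_score(question_tokens, p["tokens"], p["keywords"])
              for p, s in zip(programs, behavior_log_scores)]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

During search, candidates would then be sampled (or the beam sorted) according to this shaped distribution rather than the raw behavior policy, biasing exploration away from spurious programs.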
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-20
Comparing Updates
MMR and MAVER are more aggressive than MML: MMR and MAVER update toward one program, while MML updates toward all programs that can generate the correct answer
MMR and MAVER are more aggressive than MML: MMR and MAVER update toward one program, while MML updates toward all programs that can generate the correct answer
[]
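A tiny numeric illustration of the "Comparing Updates" point above (the probabilities are made up for illustration): MML spreads its intensity over every program that reaches the gold answer, while MMR concentrates all of it on a single reference program.

```python
# Assumed model probabilities for three candidate programs that all evaluate to
# the gold answer (illustrative numbers only).
p = {"y1": 0.5, "y2": 0.3, "y3": 0.2}

# MML intensity: proportional to each correct program's probability (hedged update).
mml = {y: prob / sum(p.values()) for y, prob in p.items()}   # {'y1': 0.5, 'y2': 0.3, 'y3': 0.2}

# MMR intensity: 1.0 on the highest-scoring correct program, 0 elsewhere (aggressive update).
ref = max(p, key=p.get)
mmr = {y: 1.0 if y == ref else 0.0 for y in p}               # {'y1': 1.0, 'y2': 0.0, 'y3': 0.0}
```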
GEM-SciDuet-train-103#paper-1267#slide-21
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible with the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When experimented on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-21
Conclusion
Discussed problems with the search and update steps in semantic parsing from denotations. Introduced policy shaping for biasing the search away from spurious programs. Introduced a generalized update equation that generalizes common update strategies and allows novel updates. Policy shaping allows more aggressive updates!
Discussed problems with the search and update steps in semantic parsing from denotations. Introduced policy shaping for biasing the search away from spurious programs. Introduced a generalized update equation that generalizes common update strategies and allows novel updates. Policy shaping allows more aggressive updates!
[]
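The "novel updates" mentioned in the conclusion slide refer mainly to MAVER, the maximum margin average violation reward variant. Below is a rough sketch of how its competing distribution differs from MMR's; restricting Eq. (7)-(8) to the candidate set K, the implicit gold answer in the reward, and the assumption that K contains at least one non-reference candidate are all simplifications for illustration.

```python
def margin_violators(K, ref, score, reward):
    """Programs in K violating score(ref) >= score(y') + R(ref) - R(y')  (cf. Eq. 7)."""
    return [y for y in K
            if y != ref and score(ref) < score(y) + (reward(ref) - reward(y))]

def mmr_competing(K, ref, score, reward):
    """MMR: all competing mass on the single most violating program (cf. Eq. 8).
    Assumes K contains at least one candidate other than the reference."""
    worst = max((y for y in K if y != ref),
                key=lambda y: score(y) + reward(ref) - reward(y))
    return {worst: 1.0}

def maver_competing(K, ref, score, reward):
    """MAVER: spread competing mass uniformly over all violators, which should give
    lower-variance (more stable) updates than penalizing only the worst offender."""
    V = margin_violators(K, ref, score, reward)
    return {y: 1.0 / len(V) for y in V} if V else {}
```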
GEM-SciDuet-train-103#paper-1267#slide-22
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible with the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When experimented on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
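As a complement to the abstract above, the following is a rough sketch of the reward and base exploration policy used in the experiments (Section 5.1): the reward is the Jaccard similarity between the executed answer and the gold answer, and with the mixing weight taken to infinity, exploration reduces to sorting candidates by reward with the model score breaking ties. The execute and score callables, and the way invalid programs are detected, are assumptions for illustration.

```python
def jaccard_reward(predicted, gold):
    """R(y, z): Jaccard similarity between the predicted and gold answer sets."""
    pred, ref = set(predicted), set(gold)
    if not pred and not ref:
        return 1.0
    return len(pred & ref) / len(pred | ref)

def rank_candidates(candidates, execute, score, gold):
    """Base exploration with the mixing weight set to infinity: sort (partial) programs
    by reward, using the current model score only to break ties; invalid programs are
    pruned (approximated here by execute returning None)."""
    valid = [y for y in candidates if execute(y) is not None]
    return sorted(valid,
                  key=lambda y: (jaccard_reward(execute(y), gold), score(y)),
                  reverse=True)
```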
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in §4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9).", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(·), q(·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value, and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1.", "We also consider the meritocratic update policy, which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017).", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, namely how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning algorithm, namely how aggressively the update penalizes the members of the program set.", "Generalized Update Equation: Δ(K) = Σ_{y∈K} w(y, x, t, z) ( ∇_θ score_θ(y, x, t) − Σ_{y′∈Y} q(y′|x, t) ∇_θ score_θ(y′, x, t) ).", "Table 1 caption (fragment): ... (cf. Eq. (7)), and ŷ is the most violating program (cf. Eq. (8)); for REINFORCE, ŷ is sampled from K using p(·), whereas for Off-Policy Policy Gradient, ŷ is sampled using u(·).", "The generalized update equation provides a tool for better understanding individual algorithms, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of a learning algorithm is closely related to the quality of the search results, given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm can be aggressive in updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(·) is closely related to the aggressiveness of the algorithm.", "For example, maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, the meritocratic update becomes more aggressive as its hyperparameter becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K.
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
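To make the lexical critique policy and the policy-shaping step of §3 concrete, the sketch below re-implements the match, co-occurrence and shaping computations of Eqs. (2)–(3) over an explicit candidate set. It is a minimal illustration under stated assumptions, not the authors' code: the three lexicon entries, the tokenised inputs and the helper names (match, co_occur, critique, shape_policy) are ours; only the scoring formulas follow the paper.

```python
import math

# Hypothetical lexicon of (question token, program keyword) pairs.  The paper uses 40
# such pairs (common superlatives and comparators); these three are stand-ins.
LEXICON = [("most", "Max"), ("least", "Min"), ("not", "NotEqual")]

def match(question_tokens, program_tokens, keywords):
    """Ratio of non-keyword program tokens that also appear in the question."""
    content = [tok for tok in program_tokens if tok not in keywords]
    if not content:
        return 0.0
    return sum(tok in question_tokens for tok in content) / len(content)

def co_occur(question_tokens, program_tokens):
    """Number of lexicon pairs (w, k) with w in the question and k in the program."""
    return sum((w in question_tokens) and (k in program_tokens) for w, k in LEXICON)

def critique(question_tokens, program_tokens, keywords):
    """critique(y, x) = match(x, y) + co_occur(y, x), as defined in Section 3."""
    return match(question_tokens, program_tokens, keywords) + \
           co_occur(question_tokens, program_tokens)

def shape_policy(candidates, behavior_probs, question_tokens, keywords, eta=5.0):
    """Eq. (2): reweight the behavior policy by p_c(y|x, t) proportional to exp(eta * critique)."""
    weights = [b * math.exp(eta * critique(question_tokens, prog, keywords))
               for prog, b in zip(candidates, behavior_probs)]
    z = sum(weights)
    return [w / z for w in weights]
```

On the Figure 2 example, a spurious candidate such as Select Nation Where Index is Min shares fewer content tokens with the question than the correct program mentioning Points, so its shaped probability drops relative to the behavior policy alone.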
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-22
Generalized Update as an Analysis Tool
MMR and MAVER are more aggressive than MML: MMR and MAVER pick only one program, while MML gives credit to all {y} that satisfy {z}. MMR and MAVER benefit more from shaping.
MMR and MAVER are more aggressive than MML: MMR and MAVER pick only one program, while MML gives credit to all {y} that satisfy {z}. MMR and MAVER benefit more from shaping.
[]
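The aggressiveness contrast summarised on this slide follows directly from the intensity column of Table 1 and can be checked numerically. The candidate scores and correctness mask below are invented for illustration only; the point is that MML spreads intensity over every candidate that reaches the gold answer, whereas MMR and MAVER place all of it on the single highest-scoring correct candidate.

```python
import numpy as np

# Toy beam of four candidate programs: model scores, plus a mask marking the two
# candidates that happen to evaluate to the gold answer.  All numbers are made up.
scores  = np.array([2.0, 1.5, 0.5, -1.0])
correct = np.array([True, True, False, False])

p = np.exp(scores) / np.exp(scores).sum()

# MML: intensity spread over every correct candidate, proportional to p(y|x, t).
w_mml = np.where(correct, p, 0.0)
w_mml = w_mml / w_mml.sum()

# MMR / MAVER: intensity 1.0 on the single highest-scoring correct candidate only.
w_mmr = np.zeros_like(scores)
w_mmr[int(np.argmax(np.where(correct, scores, -np.inf)))] = 1.0

print("MML intensity:", np.round(w_mml, 3))   # -> [0.622 0.378 0.    0.   ]
print("MMR intensity:", w_mmr)                # -> [1. 0. 0. 0.]
```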
GEM-SciDuet-train-103#paper-1267#slide-23
1267
Policy Shaping and Generalized Update Equations for Semantic Parsing from Denotations
Semantic parsing from denotations faces two key challenges in model training: (1) given only the denotations (e.g., answers), search for good candidate semantic parses, and (2) choose the best model update algorithm. We propose effective and general solutions to each of them. Using policy shaping, we bias the search procedure towards semantic parses that are more compatible with the text, which provide better supervision signals for training. In addition, we propose an update equation that generalizes three different families of learning algorithms, which enables fast model exploration. When evaluated on a recently proposed sequential question answering dataset, our framework leads to a new state-of-the-art model that outperforms previous work by 5.0% absolute on exact match accuracy.
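The first challenge named in the abstract — searching for good candidate parses when only the answer is observed — is handled in the paper by beam search over partial programs (§2.1) under the shaped exploration policy of §3. The sketch below shows one plausible wiring of that search; ProgramState, expand, score and critique are placeholder interfaces standing in for the parser's action space and scoring modules, not the DynSP++ code, and the reward-based re-ranking used in the experiments is omitted.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ProgramState:
    tokens: List[str] = field(default_factory=list)   # partial program, e.g. ["Select", "Nation"]
    complete: bool = False

def shaped_beam_search(question, table, expand: Callable, score: Callable, critique: Callable,
                       beam_size: int = 10, eta: float = 5.0, max_steps: int = 20):
    """Beam search over partial programs, ranking states by the shaped score
    score_theta(y, x, t) + eta * critique(y, x), i.e. Eqs. (2)-(3) applied in log space."""
    beam = [ProgramState()]
    finished = []
    for _ in range(max_steps):
        if not beam:
            break
        scored = []
        for state in beam:
            for nxt in expand(state, table):          # an action appends tokens/keywords to a state
                s = score(nxt, question, table) + eta * critique(nxt, question)
                scored.append((s, nxt))
        scored.sort(key=lambda pair: -pair[0])
        beam = []
        for s, state in scored[:beam_size]:
            if state.complete:
                finished.append((s, state))
            else:
                beam.append(state)
    finished.sort(key=lambda pair: -pair[0])
    return [state for _, state in finished]           # candidate programs for the update step
```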
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233 ], "paper_content_text": [ "Introduction Semantic parsing from denotations (SpFD) is the problem of mapping text to executable formal representations (or program) in a situated environment and executing them to generate denotations (or answer), in the absence of access to correct representations.", "Several problems have been handled within this framework, including question answering (Berant et al., 2013; Iyyer et al., 2017) and instructions for robots (Artzi and Zettlemoyer, 2013; Misra et al., 2015) .", "Consider the example in Figure 1 .", "Given the question and a table environment, a semantic parser maps the question to an executable program, in this case a SQL query, and then executes the query on the environment to generate the answer England.", "In the SpFD setting, the training data does not contain the correct programs.", "Thus, the existing learning approaches for SpFD perform two steps for every training example, a search step that explores the space of programs Select Nation Where Points is Maximum Program: Answer: Environment: England Figure 1 : An example of semantic parsing from denotations.", "Given the table environment, map the question to an executable program that evaluates to the answer.", "and finds suitable candidates, and an update step that uses these programs to update the model.", "Figure 2 shows the two step training procedure for the above example.", "In this paper, we address two key challenges in model training for SpFD by proposing a novel learning framework, improving both the search and update steps.", "The first challenge, the existence of spurious programs, lies in the search step.", "More specifically, while the success of the search step relies on its ability to find programs that are semantically correct, we can only verify if the program can generate correct answers, given that no gold programs are presented.", "The search step is complicated by spurious programs, which happen to evaluate to the correct answer but do not represent accurately the meaning of the natural language question.", "For example, for the environment in Figure 1 , the program Select Nation Where Name = Karen Andrew is spurious.", "Selecting spurious programs as positive examples can greatly affect the performance of semantic parsers as these programs generally do not gen- Our contributions in this work are twofold.", "To address the first challenge, we propose a policy shaping (Griffith et al., 2013) method that 
incorporates simple, lightweight domain knowledge, such as a small set of lexical pairs of tokens in the question and program, in the form of a critique policy ( Β§ 3).", "This helps bias the search towards the correct program, an important step to improve supervision signals, which benefits learning regardless of the choice of algorithm.", "To address the second challenge, we prove that the parameter update step in several algorithms are similar and can be viewed as special cases of a generalized update equation ( Β§ 4).", "The equation contains two variable terms that govern the update behavior.", "Changing these two terms effectively defines an infinite class of learning algorithms where different values lead to significantly different results.", "We study this effect and propose a novel learning framework that improves over existing methods.", "We evaluate our methods using the sequential question answering (SQA) dataset (Iyyer et al., 2017) , and show that our proposed improvements to the search and update steps consistently enhance existing approaches.", "The proposed algorithm achieves new state-of-the-art and outperforms existing parsers by 5.0%.", "Background We give a formal problem definition of the semantic parsing task, followed by the general learning framework for solving it.", "The Semantic Parsing Task The problem discussed in this paper can be formally defined as follows.", "Let X be the set of all possible questions, Y programs (e.g., SQL-like queries), T tables (i.e., the structured data in this work) and Z answers.", "We further assume access to an executor : Y β‡₯ T !", "Z, that given a program y 2 Y and a table t 2 T , generates an answer (y, t) 2 Z.", "We assume that the executor and all tables are deterministic and the executor can be called as many times as possible.", "To facilitate discussion in the following sections, we define an environment function e t : Y !", "Z, by applying the executor to the program as e t (y) = (y, t).", "Given a question x and an environment e t , our aim is to generate a program y ⇀ 2 Y and then execute it to produce the answer e t (y ⇀ ).", "Assume that for any y 2 Y, the score of y being a correct program for x is score βœ“ (y, x, t), parameterized by βœ“.", "The inference task is thus: y ⇀ = arg max y2Y score βœ“ (y, x, t) (1) As the size of Y is exponential to the length of the program, a generic search procedure is typically employed for Eq.", "(1), as efficient dynamic algorithms typically do not exist.", "These search procedures generally maintain a beam of program states sorted according to some scoring function, where each program state represents an incomplete program.", "The search then generates a new program state from an existing state by performing an action.", "Each action adds a set of tokens (e.g., Nation) and keyword (e.g., Select) to a program state.", "For example, in order to generate the program in Figure 1 , the DynSP parser (Iyyer et al., 2017) will take the first action as adding the SQL expression Select Nation.", "Notice that score βœ“ can be used in either probabilistic or nonprobabilistic models.", "For probabilistic models, we assume that it is a Boltzmann policy, meaning that p βœ“ (y | x, t) / exp{score βœ“ (y, x, t)}.", "Learning Learning a semantic parser is equivalent to learning the parameters βœ“ in the scoring function, which is a structured learning problem, due to the large, structured output space Y.", "Structured learning algorithms generally consist of two major components: search and 
update.", "When the gold programs are available during training, the search procedure finds a set of high-scoring incorrect programs.", "These programs are used by the update step to derive loss for updating parameters.", "For example, these programs are used for approximating the partition-function in maximum-likelihood objective (Liang et al., 2011) and finding set of programs causing margin violation in margin based methods (DaumΓ© III and Marcu, 2005) .", "Depending on the exact algorithm being used, these two components are not necessarily separated into isolated steps.", "For instance, parameters can be updated in the middle of search (e.g., Huang et al., 2012) .", "For learning semantic parsers from denotations, where we assume only answers are available in a training set {( x i , t i , z i )} N i=1 of N examples, the basic construction of the learning algorithms remains the same.", "However, the problems that search needs to handle in SpFD is more challenging.", "In addition to finding a set of high-scoring incorrect programs, the search procedure also needs to guess the correct program(s) evaluating to the gold answer z i .", "This problem is further complicated by the presence of spurious programs, which generate the correct answer but are semantically incompatible with the question.", "For example, although all programs in Figure 2 evaluate to the same answer, only one of them is correct.", "The issue of the spurious programs also affects the design of model update.", "For instance, maximum marginal likelihood methods treat all the programs that evaluate to the gold answer equally, while maximum margin reward networks use model score to break tie and pick one of the programs as the correct reference.", "Addressing Spurious Programs: Policy Shaping Given a training example (x, t, z), the aim of the search step is to find a set K(x, t, z) of programs consisting of correct programs that evaluate to z and high-scoring incorrect programs.", "The search step should avoid picking up spurious programs for learning since such programs typically do not generalize.", "For example, in Figure 2, the spurious program Select Nation Where Index is Min will evaluate to an incorrect answer if the indices of the first two rows are swapped 1 .", "This problem is challenging since among the programs that evaluate to the correct answer, most of them are spurious.", "The search step can be viewed as following an exploration policy b βœ“ (y|x, t, z) to explore the set of programs Y.", "This exploration is often performed by beam search and at each step, we either sample from b βœ“ or take the top scoring programs.", "The set K(x, t, z) is then used by the update step for parameter update.", "Most search strategies use an exploration policy which is based on the score function, for example b βœ“ (y|x, t, z) / exp{score βœ“ (y, t)}.", "However, this approach can suffer from a divergence phenomenon whereby the score of spurious programs picked up by the search in the first epoch increases, making it more likely for the search to pick them up in the future.", "Such divergence issues are common with latent-variable learning and often require careful initialization to overcome (Rose, 1998) .", "Unfortunately such initialization schemes are not applicable for deep neural networks which form the model of most successful semantic parsers today (Jia and Liang, 2016; Misra and Artzi, 2016; Iyyer et al., 2017) .", "Prior work, such as ✏-greedy exploration (Guu et al., 2017) , has reduced the severity of this 
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-23
Shaping and update
Better search → more aggressive update. [Search for Training] With x, t, z, beam search for suitable K = {y}; use policy shaping to find a better K (shaping affects here directly). [Update] Update θ according to K = {y}; what is the better objective function J? (shaping affects here indirectly).
Better search → more aggressive update. [Search for Training] With x, t, z, beam search for suitable K = {y}; use policy shaping to find a better K (shaping affects here directly). [Update] Update θ according to K = {y}; what is the better objective function J? (shaping affects here indirectly).
[]
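Read together, the two panels of this slide describe one epoch of training: shaped search produces the candidate set K, and a (w, q) choice from Table 1 turns K into a parameter update. A schematic composition, with every model-specific part left as a caller-supplied function, might look as follows; the function and argument names are ours, and the learning rate mirrors the 0.1 used in the paper's experiments.

```python
def train_epoch(theta, dataset, search_candidates, compute_weights, grad_score, lr=0.1):
    """One pass of the meta-learning loop (Algorithm 1): for every (x, t, z), search for a
    candidate set K with the shaped exploration policy, then apply the generalized update
    theta <- theta + lr * Delta(K).  The three callables stand in for the shaped beam
    search, a (w, q) choice from Table 1, and the parser's score gradient."""
    for x, t, z in dataset:
        K = search_candidates(theta, x, t, z)                # search step (Section 3)
        if not K:
            continue
        w, q = compute_weights(theta, K, x, t, z)            # update strategy (Section 4)
        grads = [grad_score(theta, y, x, t) for y in K]
        expected = sum(qi * g for qi, g in zip(q, grads))
        delta = sum(wi * (g - expected) for wi, g in zip(w, grads))
        theta = theta + lr * delta                           # Eq. (9), gradient ascent step
    return theta
```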
GEM-SciDuet-train-103#paper-1267#slide-24
1267
problem by introducing random noise in the search procedure to avoid saturating the search on high-scoring spurious programs.", "However, random noise need not bias the search towards the correct program(s).", "In this paper, we introduce a simple policy-shaping method to guide the search.", "This approach allows incorporating prior knowledge in the exploration policy and can bias the search away from spurious programs.", "Β» Compute generalized gradient updates 6: βœ“ = βœ“ + Β΅ (K) 7: return βœ“ Policy Shaping Policy shaping is a method to introduce prior knowledge into a policy (Griffith et al., 2013) .", "Formally, let the current behavior policy be b βœ“ (y|x, t, z) and a predefined critique policy, the prior knowledge, be p c (y|x, t).", "Policy shaping defines a new shaped behavior policy p b (y|x, t) given by: p b (y|x, t) = b βœ“ (y|x, t, z)p c (y|x, t) P y 0 2Y b βœ“ (y 0 |x, t, z)p c (y 0 |x, t) .", "(2) Using the shaped policy for exploration biases the search towards the critique policy's preference.", "We next describe a simple critique policy that we use in this paper.", "Lexical Policy Shaping We qualitatively observed that correct programs often contains tokens which are also present in the question.", "For example, the correct program in Figure 2 contains the token Points, which is also present in the question.", "We therefore, define a simple surface form similarity feature match(x, y) that computes the ratio of number of non-keyword tokens in the program y that are also present in the question x.", "However, surface-form similarity is often not enough.", "For example, both the first and fourth program in Figure 2 contain the token Points but only the fourth program is correct.", "Therefore, we also use a simple co-occurrence feature that triggers on frequently co-occurring pairs of tokens in the program and instruction.", "For example, the token most is highly likely to co-occur with a correct program containing the keyword Max.", "This happens for the example in Figure 2 .", "Similarly the token not may co-occur with the keyword NotEqual.", "We assume access to a lexicon ⇀ = {(w j , !", "j )} k j=1 containing k lexical pairs of tokens and keywords.", "Each lexical pair (w, !)", "maps the token w in a text to a keyword !", "in a program.", "For a given program y and question x, we define a co-occurrence score as co_occur(y, x) = P (w,!", ")2⇀ {w 2 x^!", "2 y}}.", "We define critique score critique(y, x) as the sum of the match and co_occur scores.", "The critique policy is given by: p c (y|x, t) / exp (⌘ ⇀ critique(y, x)) , (3) where ⌘ is a single scalar hyper-parameter denoting the confidence in the critique policy.", "Addressing Update Strategy Selection: Generalized Update Equation Given the set of programs generated by the search step, one can use many objectives to update the parameters.", "For example, previous work have utilized maximum marginal likelihood (Krishnamurthy et al., 2017; Guu et al., 2017) , reinforcement learning (Zhong et al., 2017; Guu et al., 2017) and margin based methods (Iyyer et al., 2017) .", "It could be difficult to choose the suitable algorithm from these options.", "In this section, we propose a principle and general update equation such that previous update algorithms can be considered as special cases to this equation.", "Having a general update is important for the following reasons.", "First, it allows us to understand existing algorithms better by examining their basic properties.", "Second, the generalized update equation also 
makes it easy to implement and experiment with various different algorithms.", "Moreover, it provides a framework that enables the development of new variations or extensions of existing learning methods.", "In the following, we describe how the commonly used algorithms are in fact very similartheir update rules can all be viewed as special cases of the proposed generalized update equation.", "Algorithm 1 shows the meta-learning framework.", "For every training example, we first find a set of candidates using an exploration policy (line 4).", "We use the program candidates to update the parameters (line 6).", "Commonly Used Learning Algorithms We briefly describe three algorithms: maximum marginalized likelihood, policy gradient and maximum margin reward.", "Maximum Marginalized Likelihood The maximum marginalized likelihood method maximizes the log-likelihood of the training data by marginalizing over the set of programs.", "J MML = log p(z i |x i , t i ) = log X y2Y p(z i |y, t i )p(y|x i , t i ) (4) Because an answer is deterministically computed given a program and a table, we define p(z | y, t) as 1 or 0 depending upon whether the y evaluates to z given t, or not.", "Let Gen(z, t) βœ“ Y be the set of compatible programs that evaluate to z given the table t. The objective can then be expressed as: J MML = log X y2Gen(zi,ti) p(y|x i , t i ) (5) In practice, the summation over Gen(.)", "is approximated by only using the compatible programs in the set K generated by the search step.", "Policy Gradient Methods Most reinforcement learning approaches for semantic parsing assume access to a reward function R : Y β‡₯ X β‡₯ Z !", "R, giving a scalar reward R(y, z) for a given program y and the correct answer z.", "2 We can further assume without loss of generality that the reward is always in [0, 1].", "Reinforcement learning approaches maximize the expected reward J RL : J RL = X y2Y p(y|x i , t i )R(y, z i ) (6) J RL is hard to approximate using numerical integration since the reward for all programs may not be known a priori.", "Policy gradient methods solve this by approximating the derivative using a sample from the policy.", "When the search space is large, the policy may fail to sample a correct program, which can greatly slow down the learning.", "Therefore, off-policy methods are sometimes introduced to bias the sampling towards high-reward yielding programs.", "In those methods, an additional exploration policy u(y|x i , t i , z i ) is used to improve sampling.", "Importance weights are used to make the gradient unbiased (see Appendix for derivation).", "Maximum Margin Reward For every training example (x i , t i , z i ), the maximum margin reward method finds the highest scoring program y i that evaluates to z i , as the reference program, from the set K of programs generated by the search.", "With a margin function : Y β‡₯Y β‡₯Z !", "R and reference program y, the set of programs V that violate the margin constraint can thus be defined as: V = {y 0 | y 0 2 Y and score βœ“ (y, x, t) ο£Ώ score βœ“ (y 0 , x, t) + (y, y 0 , z)}, (7) where (y, y 0 , z) = R(y, z) R(y 0 , z).", "Similarly, the program that most violates the constraint can be written as: y = arg max y 0 2Y {score βœ“ (y 0 , x, t) + (y, y 0 , z) score βœ“ (y, x, t)} (8) The most-violation margin objective (negative margin loss) is thus defined as: J MMR = max{0, score βœ“ (Θ³, x i , t i ) score βœ“ (y i , x i , t i ) + (y i ,Θ³, z i )} Unlike the previous two learning algorithms, margin methods only update the score of the 
reference program and the program that violates the margin.", "Generalized Update Equation Although the algorithms described in Β§4.1 seem very different on the surface, the gradients of their loss functions can in fact be described in the same generalized form, given in Eq.", "(9) 3 .", "In addition to the gradient of the model scoring function, this equation has two variable terms, w(Β·), q(Β·).", "We call the first term w(y, x, t, z) intensity, which is a positive scalar value and the second term q(y|x, t) the competing distribution, which is a probability distribution over programs.", "Varying them makes the equation equivalent to the update rule of the algorithms we discussed, as shown in Table 1 .", "We also consider meritocratic update policy which uses a hyperparameter to sharpen or smooth the intensity of maximum marginal likelihood (Guu et al., 2017) .", "Intuitively, w(y, x, t, z) defines the positive part of the update equation, which defines how aggressively the update favors program y.", "Likewise, q(y|x, t) defines the negative part of the learning Generalized Update Equation: (7) ) andΘ³ is the most violating program (cf.", "Eq.", "(8) ).", "For REINFORCE,Ε· is sampled from K using p(.)", "whereas for Off-Policy Policy Gradient,Ε· is sampled using u(.).", "(K) = X y2K w(y, x, t, z) 0 @ r βœ“ score βœ“ (y, x, t) X y 0 2Y q(y 0 |x, t)r βœ“ score βœ“ (y 0 , x, t) algorithm, namely how aggressively the update penalizes the members of the program set.", "The generalized update equation provides a tool for better understanding individual algorithm, and helps shed some light on when a particular method may perform better.", "Intensity versus Search Quality In SpFD, the effectiveness of the algorithms for SpFD is closely related to the quality of the search results given that the gold program is not available.", "Intuitively, if the search quality is good, the update algorithm could be aggressive on updating the model parameters.", "When the search quality is poor, the algorithm should be conservative.", "The intensity w(Β·) is closely related to the aggressiveness of the algorithm.", "For example, the maximum marginal likelihood is less aggressive given that it produces a non-zero intensity over all programs in the program set K that evaluate to the correct answer.", "The intensity for a particular correct program y is proportional to its probability p(y|x, t).", "Further, meritocratic update becomes more aggressive as becomes larger.", "In contrast, REINFORCE and maximum margin reward both have a non-zero intensity only on a single program in K. 
This value is 1.0 for maximum margin reward, while for reinforcement learning, this value is the reward.", "Maximum margin reward therefore updates most aggressively in favor of its selection while maximum marginal likelihood tends to hedge its bet.", "Therefore, the maximum margin methods should benefit the most when the search quality improves.", "Stability The general equation also allows us to investigate the stability of a model update algorithm.", "In general, the variance of update direction can be high, hence less stable, if the model update algorithm has peaky competing distribution, or it puts all of its intensity on a single program.", "For example, REINFORCE only samples one program and puts non-zero intensity only on that program, so it could be unstable depending on the sampling results.", "The competing distribution affects the stability of the algorithm.", "For example, maximum margin reward penalizes only the most violating program and is benign to other incorrect programs.", "Therefore, the MMR algorithm could be unstable during training.", "New Model Update Algorithm The general equation provides a framework that enables the development of new variations or extensions of existing learning methods.", "For example, in order to improve the stability of the MMR algorithm, we propose a simple variant of maximum margin reward, which penalizes all violating programs instead of only the most violating one.", "We call this approach maximum margin average violation reward (MAVER), which is included in Table 1 as well.", "Given that MAVER effectively considers more negative examples during each update, we expect that it is more stable compared to the MMR algorithm.", "Experiments We describe the setup in Β§5.1 and results in Β§5.2.", "Setup Dataset We use the sequential question answering (SQA) dataset (Iyyer et al., 2017) for our experiments.", "SQA contains 6,066 sequences and each sequence contains up to 3 questions, with 17,553 questions in total.", "The data is partitioned into training (83%) and test (17%) splits.", "We use 4/5 of the original train split as our training set and the remaining 1/5 as the dev set.", "We evaluate using exact match on answer.", "Previous state-of-theart result on the SQA dataset is 44.7% accuracy, using maximum margin reward learning.", "Semantic Parser Our semantic parser is based on DynSP (Iyyer et al., 2017) , which contains a set of SQL actions, such as adding a clause (e.g., Select Column) or adding an operator (e.g., Max).", "Each action has an associated neural network module that generates the score for the action based on the instruction, the table and the list of past actions.", "The score of the entire program is given by the sum of scores of all actions.", "We modified DynSP to improve its representational capacity.", "We refer to the new parser as DynSP++.", "Most notably, we included new features and introduced two additional parser actions.", "See Appendix 8.2 for more details.", "While these improvements help us achieve state-of-the-art results, the majority of the gain comes from the learning contributions described in this paper.", "Hyperparameters For each experiment, we train the model for 30 epochs.", "We find the optimal stopping epoch by evaluating the model on the dev set.", "We then train on train+dev set till the stopping epoch and evaluate the model on the held-out test set.", "Model parameters are trained using stochastic gradient descent with learning rate of 0.1.", "We set the hyperparameter ⌘ for policy shaping to 
5.", "All hyperparameters were tuned on the dev set.", "We use 40 lexical pairs for defining the co-occur score.", "We used common English superlatives (e.g., highest, most) and comparators (e.g., more, larger) and did not fit the lexical pairs based on the dataset.", "Given the model parameter βœ“, we use a base exploration policy defined in (Iyyer et al., 2017) .", "This exploration policy is given by b βœ“ (y | x, t, z) / exp( Β· R(y, z) + score βœ“ (y, βœ“, z)).", "R(y, z) is the reward function of the incomplete program y, given the answer z.", "We use a reward function R(y, z) given by the Jaccard similarity of the gold answer z and the answer generated by the program y.", "The value of is set to infinity, which essentially is equivalent to sorting the programs based on the reward and using the current model score for tie breaking.", "Further, we prune all syntactically invalid programs.", "For more details, we refer the reader to (Iyyer et al., 2017) .", "Table 2 contains the dev and test results when using our algorithm on the SQA dataset.", "We observe that margin based methods perform better than maximum likelihood methods and policy gradient in our experiment.", "Policy shaping in general improves the performance across different algorithms.", "Our best test results outperform previous SOTA by 5.0%.", "Results Policy Gradient vs Off-Policy Gradient RE-INFORCE, a simple policy gradient method, achieved extremely poor performance.", "This likely due to the problem of exploration and having to sample from a large space of programs.", "This is further corroborated from observing the much superior performance of off-policy policy gradient methods.", "Thus, the sampling policy is an important factor to consider for policy gradient methods.", "The Effect of Policy Shaping We observe that the improvement due to policy shaping is 6.0% on the SQA dataset for MAVER and only 1.3% for maximum marginal likelihood.", "We also observe that as increases, the improvement due to policy shaping for meritocratic update increases.", "This supports our hypothesis that aggressive updates of margin based methods is beneficial when the search method is more accurate as compared to maximum marginal likelihood which hedges its bet between all programs that evaluate to the right answer.", "Stability of MMR In Section 4, the general update equation helps us point out that MMR could be unstable due to the peaky competing distribution.", "MAVER was proposed to increase the stability of the algorithm.", "To measure stability, we cal- Table 4 .", "Policy Shaping vs Model Shaping Critique policy contains useful information that can bias the search away from spurious programs.", "Therefore, one can also consider making the critique policy as part of the model.", "We call this model shaping.", "We define our model to be the shaped policy and train and test using the new model.", "Using MAVER updates, we found that the dev accuracy dropped to 37.1%.", "We conjecture that the strong prior in the critique policy can hinder generalization in model shaping.", "Related Work Semantic Parsing from Denotation Mapping natural language text to formal meaning representation was first studied by Montague (1970) .", "Early work on learning semantic parsers rely on labeled formal representations as the supervision signals Collins, 2005, 2007; Zelle and Mooney, 1993) .", "However, because getting access to gold formal representation generally requires expensive annotations by an expert, distant supervision approaches, where 
semantic parsers are learned from denotation only, have become the main learning paradigm (e.g., Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Iyyer et al., 2017; Krishnamurthy et al., 2017) .", "Guu et al.", "(2017) studied the problem of spurious programs and considered adding noise to diversify the search procedure and introduced meritocratic updates.", "Reinforcement Learning Algorithms Reinforcement learning algorithms have been applied to various NLP problems including dialogue (Li et al., 2016 ), text-based games (Narasimhan et al., 2015) , information extraction (Narasimhan et al., 2016) , coreference resolution (Clark and Man- ning, 2016), semantic parsing (Guu et al., 2017) and instruction following (Misra et al., 2017) .", "Guu et al.", "(2017) show that policy gradient methods underperform maximum marginal likelihood approaches.", "Our result on the SQA dataset supports their observation.", "However, we show that using off-policy sampling, policy gradient methods can provide superior performance to maximum marginal likelihood methods.", "Margin-based Learning Margin-based methods have been considered in the context of SVM learning.", "In the NLP literature, margin based learning has been applied to parsing (Taskar et al., 2004; McDonald et al., 2005) , text classification (Taskar et al., 2003) , machine translation (Watanabe et al., 2007) and semantic parsing (Iyyer et al., 2017) .", "Kummerfeld et al.", "(2015) found that max-margin based methods generally outperform likelihood maximization on a range of tasks.", "Previous work have studied connections between margin based method and likelihood maximization for supervised learning setting.", "We show them as special cases of our unified update equation for distant supervision learning.", "Similar to this work, Lee et al.", "(2016) also found that in the context of supervised learning, margin-based algorithms which update all violated examples perform better than the one that only updates the most violated example.", "Latent Variable Modeling Learning semantic parsers from denotation can be viewed as a latent variable modeling problem, where the program is the latent variable.", "Probabilistic latent variable models have been studied using EM-algorithm and its variant (Dempster et al., 1977) .", "The graphical model literature has studied latent variable learning on margin-based methods (Yu and Joachims, 2009 ) and probabilistic models (Quattoni et al., 2007) .", "Samdani et al.", "(2012) studied various vari-ants of EM algorithm and showed that all of them are special cases of a unified framework.", "Our generalized update framework is similar in spirit.", "Conclusion In this paper, we propose a general update equation from semantic parsing from denotation and propose a policy shaping method for addressing the spurious program challenge.", "For the future, we plan to apply the proposed learning framework to more semantic parsing tasks and consider new methods for policy shaping." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.2", "5", "5.1", "5.2", "6", "7" ], "paper_header_content": [ "Introduction", "Background", "The Semantic Parsing Task", "Learning", "Addressing Spurious Programs: Policy Shaping", "Addressing Update Strategy Selection:", "Commonly Used Learning Algorithms", "Generalized Update Equation", "Experiments", "Setup", "Results", "Related Work", "Conclusion" ] }
GEM-SciDuet-train-103#paper-1267#slide-24
Novel Learning Algorithm
Intensity Competing Distribution Dev Performance Maximum Margin Reward (MMR) Maximum Margin Reward (MMR) Maximum Margin Reward (MMR) Maximum Marginal (MML) Likelihood Mixing the MMRs intensity and MMLs competing distribution gives an update that outperforms MMR.
Intensity Competing Distribution Dev Performance Maximum Margin Reward (MMR) Maximum Margin Reward (MMR) Maximum Margin Reward (MMR) Maximum Marginal (MML) Likelihood Mixing the MMRs intensity and MMLs competing distribution gives an update that outperforms MMR.
[]
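The policy-shaping step described in the record above combines the parser's behavior policy with a lexical critique policy (Eq. 2 and Eq. 3 of that paper). Below is a minimal Python sketch of that combination, assuming the candidate programs, their behavior-policy probabilities, the keyword set, and the token lexicon are already available; all function and variable names are illustrative, not taken from the paper's code.

    import math

    def critique(question_tokens, program_tokens, keywords, lexicon):
        # match: share of non-keyword program tokens that also appear in the question
        content = [t for t in program_tokens if t not in keywords]
        match = sum(t in question_tokens for t in content) / max(len(content), 1)
        # co_occur: count of lexicon pairs (word, keyword) hit by this question/program pair
        co_occur = sum(1 for w, kw in lexicon
                       if w in question_tokens and kw in program_tokens)
        return match + co_occur

    def shape_policy(behavior_probs, critique_scores, eta=5.0):
        # shaped policy p_b(y) is proportional to b_theta(y) * p_c(y),
        # with p_c(y) proportional to exp(eta * critique(y, x)); eta = 5 in the paper
        weights = [b * math.exp(eta * c)
                   for b, c in zip(behavior_probs, critique_scores)]
        z = sum(weights)
        return [w / z for w in weights]

Search candidates can then be drawn from the output of shape_policy instead of the raw behavior probabilities, which is what biases exploration away from spurious programs.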
GEM-SciDuet-train-104#paper-1274#slide-0
1274
Extracting Commonsense Properties from Embeddings with Limited Human Guidance
Intelligent systems require common sense, but automatically extracting this knowledge from text can be difficult. We propose and assess methods for extracting one type of commonsense knowledge, object-property comparisons, from pretrained embeddings. In experiments, we show that our approach exceeds the accuracy of previous work but requires substantially less hand-annotated knowledge. Further, we show that an active learning approach that synthesizes common-sense queries can boost accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Automatically extracting common sense from text is a long-standing challenge in natural language processing (Schubert, 2002; Van Durme and Schubert, 2008; Vanderwende, 2005) .", "As argued by Forbes and Yejin (2017) , typical language use may reflect common sense, but the commonsense knowledge itself is not often explicitly stated, due to reporting bias (Gordon and Van Durme, 2013) .", "Thus, additional human knowledge or annotated training data are often used to help systems learn common sense.", "In this paper, we study methods for reducing the amount of human input needed to learn common sense.", "Specifically, we focus on learning relative comparisons of (one-dimensional) object properties, such as the fact that a cantaloupe is more round than a hammer.", "Methods for learning this kind of common sense have been developed previously (e.g.", "Forbes and Choi, 2017) , but the best-performing methods in that previous work requires dozens of manually-annotated frames for each comparison property, to connect the property to how it is indirectly reflected in text-e.g., if text asserts that \"x carries y,\" this implies that x is probably larger than y.", "Our architecture for relative comparisons follows the zero-shot learning paradigm (Palatucci et al., 2009) .", "It takes the form of a neural network that compares a projection of embeddings for each of two objects (e.g.", "\"elephant\" and \"tiger\") to the embeddings for the two poles of the target dimension of comparison (e.g., \"big\" and \"small\" for the size property).", "The projected object embeddings are trained to be closer to the appropriate pole, using a small training set of hand-labeled comparisons.", "Our experiments reveal that our architecture outperforms previous work, despite using less annotated data.", "Further, because our architecture takes the property (pole) labels as arguments, it can extend to the zero-shot setting in which we evaluate on properties not seen in training.", "We find that in zero-shot, our approach outperforms baselines and comes close to supervised results, but providing labels for both poles of the relation rather than just one is important.", "Finally, because the number of properties we wish to learn is large, we experiment with active learning (AL) over a larger property space.", "We show that synthesizing AL queries can be effective using an approach that explicitly models which comparison questions are nonsensical (e.g., is Batman taller than Democracy?).", "We release our code base and a new commonsense data set to the research community.", "1 Problem Definition and Methods We define the task of comparing object properties in two different ways: a three-way classification task, and a four-way classification task.", "In the three-way classification task, we want to estimate the following conditional probability: P (L|O 1 , O 2 , Property), L ∈ { < , > , β‰ˆ }.", "1 https://github.com/yangyiben/PCE For example, P rob(An elephant is larger than a dog) can be expressed as P (L = > |O 1 
= \"elephant\", O 2 = \"dog\", Property = \"size\").", "The three-way classification task has been explored in previous work (Forbes and Choi, 2017) and is only performed on triples where both objects have the property, so that the comparison is meaningful.", "In applications, however, we may not know in advance which comparisons are meaningful.", "Thus, we also define a four-way classification task to include \"not applicable\" as the fourth label, so that inference can be performed on any objectproperty triples.", "In the four-way task, the system is tasked with identifying the nonsensical comparisons.", "Formally, we want to estimate the following conditional probability: P (L|O 1 , O 2 , Property), L∈{ < , > , β‰ˆ , N/A }.", "Three-way Model For each comparison property, we pick an adjective and its antonym to represent the { < , > } labels.", "For example, for the property size, we pick \"big\" and \"small\".", "The adjective \"similar\" serves as the label for β‰ˆ for all properties.", "Under this framework, a relative comparison question, for instance, \"Is a dog bigger than an elephant?", "\", can be formulated as a quintuple query to the model, namely {dog, elephant, small, similar, big}.", "Denoting the word embeddings for tokens in a quintuple query as X, Y , R < , R β‰ˆ , R > , our three-way model is defined as follows: P (L = s|Q) = sof tmax(R s Β· Οƒ((X βŠ• Y )W )), for s ∈ {<, >, β‰ˆ}, where Q is an quintuple query, Οƒ(Β·) is an activation function and W is a learnable weight matrix.", "The symbol βŠ• represents concatenation.", "We refer to this method as PCE (Property Comparison from Embeddings) for the 3-way task.", "We also experiment with generating label representations from just a single ad- jective (property) embedding R < , namely R β‰ˆ = Οƒ(R < W 2 ), R > = Οƒ(R < W 3 ) .", "We refer to this simpler method as PCE(one-pole).", "We note that in both the three-and four-way settings, the question \"A>B?\"", "is equivalent to \"B<A?\".", "We leverage this fact at test time by feeding our network a reversed object pair, and taking the average of the aligned network outputs before the softmax layer to reduce prediction variance.", "We refer to our model without this technique as PCE(no reverse).", "The key distinction of our method is that it learns a projection from the object word embedding space to the label embedding space.", "This allows the model to leverage the property label embeddings to perform zero-shot prediction on properties not observed in training.", "For example, from a training example \"dogs are smaller than elephants\", the model will learn a projection that puts \"dogs\" relatively closer to \"small,\" and far from \"big\" and \"similar.\"", "Doing so may also result in projecting \"dog\" to be closer to \"light\" than to \"heavy,\" such that the model is able to predict \"dogs are lighter than elephants\" despite never being trained on any weight comparison examples.", "Four-way Model Our four-way model is the same as our three-way model, with an additional module to learn whether the comparison is applicable.", "Keeping the other output nodes unchanged, we add an additional component into the softmax layer to output the probability of \"N/A\": h x = Οƒ(XW a ), h y = Οƒ(Y W a ), A i = h i Β· R > + h i Β· R < , P (L = N/A |Q) ∝ exp(A x + A y ).", "Synthesis for Active Learning We propose a method to synthesize informative queries to pose to annotators, a form of active learning (Settles, 2009 ).", "We use the common heuristic that an informative 
training example will have a high uncertainty in the model's predictive distribution.", "We adopt the confidence measure (Culotta and McCallum, 2005) to access the uncertainty of a given example: U ncertainty(x) = 1 βˆ’ max y P (y|x, D train ).", "Good candidates for acquisition should have high uncertainty measure, but we also want to avoid querying outliers.", "As the vocabulary is finite, it is possible to evaluate the uncertainty measures for all possible inputs to synthesize the most uncertain query.", "However, such a greedy policy is expensive and prone to selecting outliers.", "Hence, we adopt a sampling based synthesis strategy: at each round, we generate one random object pair per property, and query the one that achieves the highest uncertainty measure.", "A classical difficulty faced by synthesis approaches to active learning is that they may pro-duce unnatural queries that are difficult for a human to label (Baum and Lang, 1992).", "However, our task formulation includes \"similar\" and \"N/A\" classes that encompass many of the more difficult or confusing comparisons, which we believe aids the effectiveness of the synthesis approach.", "Experiments We now present our experimental results on both the three-way and four-way tasks.", "Data Sets We test our three-way model on the VERB PHYSICS data set from (Forbes and Choi, 2017) .", "As there are only 5 properties in VERB PHYSICS, we also develop a new data set we call PROP-ERTY COMMON SENSE.", "We select 32 commonsense properties to form our property set (e.g., value, roundness, deliciousness, intelligence, etc.).", "We extract object nouns from the McRae Feature Norms dataset (McRae et al., 2005) and add selected named entities to form a object vocabulary of 689 distinct objects.", "We randomly generate 3148 object-property triples, label them and reserve 45% of the data for the test set.", "We further add 5 manually-selected applicable comparison examples per property to our test set, in order to make sure each property has some applicable testing examples.", "To verify the labeling, we have a second annotator redundantly label 200 examples and find a Cohen's Kappa of 0.64, which indicates good annotator agreement (we analyze the source of the disagreements in Section 4.1).", "The training set is used for the passive learning and pool-based active learning, and a human oracle provides labels in the synthesis active learning setting.", "Experimental Setup We experiment with three types of embeddings: GloVe, normalized 300-dimensional embeddings trained on a corpus of 6B tokens (Pennington et al., 2014) (the F&C method (Forbes and Choi, 2017) uses the 100-dimensional version, as it achieves the highest validation accuracy for their methods); Word2vec, normalized 300dimensional embeddings trained on 100B tokens (Mikolov et al., 2013) ; and LSTM, the normalized 1024-dimensional weight matrix from the softmax layer of the Google 1B LSTM language model (Jozefowicz et al., 2016) .", "For training PCE, we use an identity activation function and apply 50% dropout.", "We use the Adam optimizer with default settings to train the models for 800 epochs, minimizing cross entropy loss.", "For zero-shot learning, we adopt a hold-oneproperty-out scheme to test our models' zero-shot performance.", "Finally, for active learning, we use Word2vec embeddings.", "All the models are trained on 200 random training examples to warm up.", "We train for 20 epochs after each label acquisition.", "To smooth noise, we report the average of 20 different 
runs of random (passive learning) and least confident (LC) pool-based active learning (Culotta and McCallum, 2005) baselines.", "We report the average of only 6 runs for an expected model change (EMC) pool-based active learning (Cai et al., 2013) baseline due to its high computational cost, and of only 2 runs for our synthesis active learning approach due to its high labeling cost.", "The pool size is 1540 examples.", "Results In Table 1 , we compare the performance of the three-way PCE model against the existing state of the art on the VERB PHYSICS data set.", "The use of LSTM embeddings in PCE yields the best accuracy for all properties.", "Across all embedding choices, PCE performs as well or better than F&C, despite the fact that PCE does not use the annotated frames that F&C requires (approximately 188 labels per property).", "Thus, our approach matches or exceeds the performance of previous work using significantly less annotated knowledge.", "The lower performance of \"no reverse\" shows that the simple method of averaging over the reversed object pair is effective.", "Table 2 evaluates our models on properties not seen in training (zero-shot learning).", "We compare against a random baseline, and an Emb-Similarity baseline that classifies based on the cosine similarity of the object embeddings to the pole label embeddings (i.e., without the projection layer in PCE).", "PCE outperforms the baselines.", "Although the one-pole method was shown to perform similarly to the two-pole method for properties seen in training (Table 1) , we see that for zero-shot learning, using two poles is important.", "In Table 3 , we show that our four-way models with different embeddings beat both the majority and random baselines on the PROPERTY Table 3 : Accuracy on the four-way task on the PROPERTY COMMON SENSE data.", "COMMON SENSE data.", "Here, the LSTM embeddings perform similarly to the Word2vec embeddings, perhaps because the PROPERTY COM-MON SENSE vocabulary consists of less frequent nouns than in VERB PHYSICS.", "Thus, the Word2vec embeddings are able to catch up due to their larger vocabulary and much larger training corpus.", "Finally, in Figure 1 , we evaluate in the active learning setting.", "The synthesis approach performs best, especially later in training when the training pool for the pool-based methods has only uninformative examples remaining.", "Figure 2 helps explain the relative advantage of the synthesis approach: it is able to continue synthesizing informative (uncertain) queries throughout the entire training run.", "Discussion Sources of annotator disagreement As noted above, we found a \"good\" level of agreement (Cohen's Kappa of 0.64) for our PROPERTY COMMON SENSE data, which is lower than one might expect for task aimed at common sense.", "We analyzed the disagreements and found that they stem from two sources of subjectivity in the task.", "The first is that different labelers may have different thresholds for what counts as similar-a spider and an ant might be marked similar in size for one labeler, but not for another labeler.", "In our data, 58% of the disagreements are cases in which one annotator marks similar while the other says not similar.", "The second is that different labelers have different standards for whether a comparison is N/A.", "For example, in our data set, one labeler labels that a toaster is physically stronger than alcohol, and the other labeler says the comparison is N/A.", "37% of our disagreements are due to this type of subjectivity.", "The 
above two types of subjectivity account for almost all disagreements (95%), and the remaining 5% are due to annotation errors (one of the annotators makes mistake).", "Model Interpretation Since we adopt an identity activation function and a single layer design, it is possible to simplify the mathematical expression of our model to make it more interpretable.", "After accounting for model averaging, we have the following equality: P (L =< |Q) ∝ exp(R < Β· ((X βŠ• Y )W ) + R > Β· ((Y βŠ• X)W )) = exp(R T < (XW 1 + Y W 2 ) + R T > (Y W 1 + XW 2 )) ∝ exp((R < βˆ’ R > ) T (XW 1 + XW 2 )), where W = W 1 βŠ• W 2 .", "So we can define a score of \"R < \" for a object with embedding X as the following: score(X, R < ) = (R < βˆ’ R > ) T (XW 1 + XW 2 ).", "An object with a higher score for R < is more associated with the R < pole than the R > one.", "For example, score(\"elephant\",\"small\") represents how small an elephant is-a larger score indicates a smaller object.", "Table 4 shows smallness scores for 5 randomly picked objects from the VERB PHYSICS data set.", "PCE tends to assign higher scores to the smaller objects in the set.", "Sensitivity to pole labels PCE requires labels for the poles of the target object property.", "Table 5 : Trained and zero-shot accuracies for different word choices analysis to pole labels, evaluating the test accuracy of PCE as the pole label varies among different combinations of synonyms for the size and speed relations.", "We evaluate in both the trained setting (comparable to the results in Table 1 ) and the zero-shot setting (comparable to Table 2 ).", "We see that the trained accuracy remains essentially unchanged for different pole labels.", "In the zeroshot setting, all combinations achieve accuracy that beats the baselines in Table 2 , but the accuracy value is somewhat sensitive to the choice of pole label.", "Exploring how to select pole labels and experimenting with richer pole representations such as textual definitions are items of future work.", "Conclusion In this paper, we presented a method for extracting commonsense knowledge from embeddings.", "Our experiments demonstrate that the approach is effective at performing relative comparisons of object properties using less hand-annotated knowledge than in previous work.", "A synthesis active learner was found to boost accuracy, and further experiments with this approach are an item of future work." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Problem Definition and Methods", "Three-way Model", "Four-way Model", "Synthesis for Active Learning", "Experiments", "Data Sets", "Experimental Setup", "Results", "Sources of annotator disagreement", "Model Interpretation", "Sensitivity to pole labels", "Conclusion" ] }
GEM-SciDuet-train-104#paper-1274#slide-0
Commonsense Property Comparison Task
Is an elephant bigger or smaller than a mouse? Is Ferrari more expensive or cheaper than beer?
Is an elephant bigger or smaller than a mouse? Is Ferrari more expensive or cheaper than beer?
[]
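The PCE three-way model in the record above scores each pole by comparing the pole's label embedding against a learned projection of the concatenated object embeddings, P(L = s | Q) = softmax(R_s Β· Οƒ((X βŠ• Y)W)). A rough NumPy sketch of that forward pass is given below, assuming pretrained word embeddings and an already-trained projection W; the identity activation follows the paper's setup, and the test-time trick of averaging with the reversed object pair is omitted.

    import numpy as np

    def pce_three_way(x_emb, y_emb, pole_embs, W):
        # x_emb, y_emb: object word embeddings, shape (d,)
        # pole_embs: embeddings of the pole adjectives, e.g. [small, similar, big], each (d,)
        # W: learned projection from the concatenated pair space (2d,) to the label space (d,)
        h = np.concatenate([x_emb, y_emb]) @ W          # identity activation
        logits = np.array([r @ h for r in pole_embs])   # one score per pole
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()                          # P(<|Q), P(~|Q), P(>|Q)

    # e.g. "is a dog bigger than an elephant?" becomes
    # pce_three_way(emb["dog"], emb["elephant"], [emb["small"], emb["similar"], emb["big"]], W)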
GEM-SciDuet-train-104#paper-1274#slide-2
1274
Extracting Commonsense Properties from Embeddings with Limited Human Guidance
Intelligent systems require common sense, but automatically extracting this knowledge from text can be difficult. We propose and assess methods for extracting one type of commonsense knowledge, object-property comparisons, from pretrained embeddings. In experiments, we show that our approach exceeds the accuracy of previous work but requires substantially less hand-annotated knowledge. Further, we show that an active learning approach that synthesizes common-sense queries can boost accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Automatically extracting common sense from text is a long-standing challenge in natural language processing (Schubert, 2002; Van Durme and Schubert, 2008; Vanderwende, 2005) .", "As argued by Forbes and Yejin (2017) , typical language use may reflect common sense, but the commonsense knowledge itself is not often explicitly stated, due to reporting bias (Gordon and Van Durme, 2013) .", "Thus, additional human knowledge or annotated training data are often used to help systems learn common sense.", "In this paper, we study methods for reducing the amount of human input needed to learn common sense.", "Specifically, we focus on learning relative comparisons of (one-dimensional) object properties, such as the fact that a cantaloupe is more round than a hammer.", "Methods for learning this kind of common sense have been developed previously (e.g.", "Forbes and Choi, 2017) , but the best-performing methods in that previous work requires dozens of manually-annotated frames for each comparison property, to connect the property to how it is indirectly reflected in text-e.g., if text asserts that \"x carries y,\" this implies that x is probably larger than y.", "Our architecture for relative comparisons follows the zero-shot learning paradigm (Palatucci et al., 2009) .", "It takes the form of a neural network that compares a projection of embeddings for each of two objects (e.g.", "\"elephant\" and \"tiger\") to the embeddings for the two poles of the target dimension of comparison (e.g., \"big\" and \"small\" for the size property).", "The projected object embeddings are trained to be closer to the appropriate pole, using a small training set of hand-labeled comparisons.", "Our experiments reveal that our architecture outperforms previous work, despite using less annotated data.", "Further, because our architecture takes the property (pole) labels as arguments, it can extend to the zero-shot setting in which we evaluate on properties not seen in training.", "We find that in zero-shot, our approach outperforms baselines and comes close to supervised results, but providing labels for both poles of the relation rather than just one is important.", "Finally, because the number of properties we wish to learn is large, we experiment with active learning (AL) over a larger property space.", "We show that synthesizing AL queries can be effective using an approach that explicitly models which comparison questions are nonsensical (e.g., is Batman taller than Democracy?).", "We release our code base and a new commonsense data set to the research community.", "1 Problem Definition and Methods We define the task of comparing object properties in two different ways: a three-way classification task, and a four-way classification task.", "In the three-way classification task, we want to estimate the following conditional probability: P (L|O 1 , O 2 , Property), L ∈ { < , > , β‰ˆ }.", "1 https://github.com/yangyiben/PCE For example, P rob(An elephant is larger than a dog) can be expressed as P (L = > |O 1 
= \"elephant\", O 2 = \"dog\", Property = \"size\").", "The three-way classification task has been explored in previous work (Forbes and Choi, 2017) and is only performed on triples where both objects have the property, so that the comparison is meaningful.", "In applications, however, we may not know in advance which comparisons are meaningful.", "Thus, we also define a four-way classification task to include \"not applicable\" as the fourth label, so that inference can be performed on any objectproperty triples.", "In the four-way task, the system is tasked with identifying the nonsensical comparisons.", "Formally, we want to estimate the following conditional probability: P (L|O 1 , O 2 , Property), L∈{ < , > , β‰ˆ , N/A }.", "Three-way Model For each comparison property, we pick an adjective and its antonym to represent the { < , > } labels.", "For example, for the property size, we pick \"big\" and \"small\".", "The adjective \"similar\" serves as the label for β‰ˆ for all properties.", "Under this framework, a relative comparison question, for instance, \"Is a dog bigger than an elephant?", "\", can be formulated as a quintuple query to the model, namely {dog, elephant, small, similar, big}.", "Denoting the word embeddings for tokens in a quintuple query as X, Y , R < , R β‰ˆ , R > , our three-way model is defined as follows: P (L = s|Q) = sof tmax(R s Β· Οƒ((X βŠ• Y )W )), for s ∈ {<, >, β‰ˆ}, where Q is an quintuple query, Οƒ(Β·) is an activation function and W is a learnable weight matrix.", "The symbol βŠ• represents concatenation.", "We refer to this method as PCE (Property Comparison from Embeddings) for the 3-way task.", "We also experiment with generating label representations from just a single ad- jective (property) embedding R < , namely R β‰ˆ = Οƒ(R < W 2 ), R > = Οƒ(R < W 3 ) .", "We refer to this simpler method as PCE(one-pole).", "We note that in both the three-and four-way settings, the question \"A>B?\"", "is equivalent to \"B<A?\".", "We leverage this fact at test time by feeding our network a reversed object pair, and taking the average of the aligned network outputs before the softmax layer to reduce prediction variance.", "We refer to our model without this technique as PCE(no reverse).", "The key distinction of our method is that it learns a projection from the object word embedding space to the label embedding space.", "This allows the model to leverage the property label embeddings to perform zero-shot prediction on properties not observed in training.", "For example, from a training example \"dogs are smaller than elephants\", the model will learn a projection that puts \"dogs\" relatively closer to \"small,\" and far from \"big\" and \"similar.\"", "Doing so may also result in projecting \"dog\" to be closer to \"light\" than to \"heavy,\" such that the model is able to predict \"dogs are lighter than elephants\" despite never being trained on any weight comparison examples.", "Four-way Model Our four-way model is the same as our three-way model, with an additional module to learn whether the comparison is applicable.", "Keeping the other output nodes unchanged, we add an additional component into the softmax layer to output the probability of \"N/A\": h x = Οƒ(XW a ), h y = Οƒ(Y W a ), A i = h i Β· R > + h i Β· R < , P (L = N/A |Q) ∝ exp(A x + A y ).", "Synthesis for Active Learning We propose a method to synthesize informative queries to pose to annotators, a form of active learning (Settles, 2009 ).", "We use the common heuristic that an informative 
training example will have a high uncertainty in the model's predictive distribution.", "We adopt the confidence measure (Culotta and McCallum, 2005) to access the uncertainty of a given example: U ncertainty(x) = 1 βˆ’ max y P (y|x, D train ).", "Good candidates for acquisition should have high uncertainty measure, but we also want to avoid querying outliers.", "As the vocabulary is finite, it is possible to evaluate the uncertainty measures for all possible inputs to synthesize the most uncertain query.", "However, such a greedy policy is expensive and prone to selecting outliers.", "Hence, we adopt a sampling based synthesis strategy: at each round, we generate one random object pair per property, and query the one that achieves the highest uncertainty measure.", "A classical difficulty faced by synthesis approaches to active learning is that they may pro-duce unnatural queries that are difficult for a human to label (Baum and Lang, 1992).", "However, our task formulation includes \"similar\" and \"N/A\" classes that encompass many of the more difficult or confusing comparisons, which we believe aids the effectiveness of the synthesis approach.", "Experiments We now present our experimental results on both the three-way and four-way tasks.", "Data Sets We test our three-way model on the VERB PHYSICS data set from (Forbes and Choi, 2017) .", "As there are only 5 properties in VERB PHYSICS, we also develop a new data set we call PROP-ERTY COMMON SENSE.", "We select 32 commonsense properties to form our property set (e.g., value, roundness, deliciousness, intelligence, etc.).", "We extract object nouns from the McRae Feature Norms dataset (McRae et al., 2005) and add selected named entities to form a object vocabulary of 689 distinct objects.", "We randomly generate 3148 object-property triples, label them and reserve 45% of the data for the test set.", "We further add 5 manually-selected applicable comparison examples per property to our test set, in order to make sure each property has some applicable testing examples.", "To verify the labeling, we have a second annotator redundantly label 200 examples and find a Cohen's Kappa of 0.64, which indicates good annotator agreement (we analyze the source of the disagreements in Section 4.1).", "The training set is used for the passive learning and pool-based active learning, and a human oracle provides labels in the synthesis active learning setting.", "Experimental Setup We experiment with three types of embeddings: GloVe, normalized 300-dimensional embeddings trained on a corpus of 6B tokens (Pennington et al., 2014) (the F&C method (Forbes and Choi, 2017) uses the 100-dimensional version, as it achieves the highest validation accuracy for their methods); Word2vec, normalized 300dimensional embeddings trained on 100B tokens (Mikolov et al., 2013) ; and LSTM, the normalized 1024-dimensional weight matrix from the softmax layer of the Google 1B LSTM language model (Jozefowicz et al., 2016) .", "For training PCE, we use an identity activation function and apply 50% dropout.", "We use the Adam optimizer with default settings to train the models for 800 epochs, minimizing cross entropy loss.", "For zero-shot learning, we adopt a hold-oneproperty-out scheme to test our models' zero-shot performance.", "Finally, for active learning, we use Word2vec embeddings.", "All the models are trained on 200 random training examples to warm up.", "We train for 20 epochs after each label acquisition.", "To smooth noise, we report the average of 20 different 
runs of random (passive learning) and least confident (LC) pool-based active learning (Culotta and McCallum, 2005) baselines.", "We report the average of only 6 runs for an expected model change (EMC) pool-based active learning (Cai et al., 2013) baseline due to its high computational cost, and of only 2 runs for our synthesis active learning approach due to its high labeling cost.", "The pool size is 1540 examples.", "Results In Table 1 , we compare the performance of the three-way PCE model against the existing state of the art on the VERB PHYSICS data set.", "The use of LSTM embeddings in PCE yields the best accuracy for all properties.", "Across all embedding choices, PCE performs as well or better than F&C, despite the fact that PCE does not use the annotated frames that F&C requires (approximately 188 labels per property).", "Thus, our approach matches or exceeds the performance of previous work using significantly less annotated knowledge.", "The lower performance of \"no reverse\" shows that the simple method of averaging over the reversed object pair is effective.", "Table 2 evaluates our models on properties not seen in training (zero-shot learning).", "We compare against a random baseline, and an Emb-Similarity baseline that classifies based on the cosine similarity of the object embeddings to the pole label embeddings (i.e., without the projection layer in PCE).", "PCE outperforms the baselines.", "Although the one-pole method was shown to perform similarly to the two-pole method for properties seen in training (Table 1) , we see that for zero-shot learning, using two poles is important.", "In Table 3 , we show that our four-way models with different embeddings beat both the majority and random baselines on the PROPERTY Table 3 : Accuracy on the four-way task on the PROPERTY COMMON SENSE data.", "COMMON SENSE data.", "Here, the LSTM embeddings perform similarly to the Word2vec embeddings, perhaps because the PROPERTY COM-MON SENSE vocabulary consists of less frequent nouns than in VERB PHYSICS.", "Thus, the Word2vec embeddings are able to catch up due to their larger vocabulary and much larger training corpus.", "Finally, in Figure 1 , we evaluate in the active learning setting.", "The synthesis approach performs best, especially later in training when the training pool for the pool-based methods has only uninformative examples remaining.", "Figure 2 helps explain the relative advantage of the synthesis approach: it is able to continue synthesizing informative (uncertain) queries throughout the entire training run.", "Discussion Sources of annotator disagreement As noted above, we found a \"good\" level of agreement (Cohen's Kappa of 0.64) for our PROPERTY COMMON SENSE data, which is lower than one might expect for task aimed at common sense.", "We analyzed the disagreements and found that they stem from two sources of subjectivity in the task.", "The first is that different labelers may have different thresholds for what counts as similar-a spider and an ant might be marked similar in size for one labeler, but not for another labeler.", "In our data, 58% of the disagreements are cases in which one annotator marks similar while the other says not similar.", "The second is that different labelers have different standards for whether a comparison is N/A.", "For example, in our data set, one labeler labels that a toaster is physically stronger than alcohol, and the other labeler says the comparison is N/A.", "37% of our disagreements are due to this type of subjectivity.", "The 
above two types of subjectivity account for almost all disagreements (95%), and the remaining 5% are due to annotation errors (one of the annotators makes mistake).", "Model Interpretation Since we adopt an identity activation function and a single layer design, it is possible to simplify the mathematical expression of our model to make it more interpretable.", "After accounting for model averaging, we have the following equality: P (L =< |Q) ∝ exp(R < Β· ((X βŠ• Y )W ) + R > Β· ((Y βŠ• X)W )) = exp(R T < (XW 1 + Y W 2 ) + R T > (Y W 1 + XW 2 )) ∝ exp((R < βˆ’ R > ) T (XW 1 + XW 2 )), where W = W 1 βŠ• W 2 .", "So we can define a score of \"R < \" for a object with embedding X as the following: score(X, R < ) = (R < βˆ’ R > ) T (XW 1 + XW 2 ).", "An object with a higher score for R < is more associated with the R < pole than the R > one.", "For example, score(\"elephant\",\"small\") represents how small an elephant is-a larger score indicates a smaller object.", "Table 4 shows smallness scores for 5 randomly picked objects from the VERB PHYSICS data set.", "PCE tends to assign higher scores to the smaller objects in the set.", "Sensitivity to pole labels PCE requires labels for the poles of the target object property.", "Table 5 : Trained and zero-shot accuracies for different word choices analysis to pole labels, evaluating the test accuracy of PCE as the pole label varies among different combinations of synonyms for the size and speed relations.", "We evaluate in both the trained setting (comparable to the results in Table 1 ) and the zero-shot setting (comparable to Table 2 ).", "We see that the trained accuracy remains essentially unchanged for different pole labels.", "In the zeroshot setting, all combinations achieve accuracy that beats the baselines in Table 2 , but the accuracy value is somewhat sensitive to the choice of pole label.", "Exploring how to select pole labels and experimenting with richer pole representations such as textual definitions are items of future work.", "Conclusion In this paper, we presented a method for extracting commonsense knowledge from embeddings.", "Our experiments demonstrate that the approach is effective at performing relative comparisons of object properties using less hand-annotated knowledge than in previous work.", "A synthesis active learner was found to boost accuracy, and further experiments with this approach are an item of future work." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Problem Definition and Methods", "Three-way Model", "Four-way Model", "Synthesis for Active Learning", "Experiments", "Data Sets", "Experimental Setup", "Results", "Sources of annotator disagreement", "Model Interpretation", "Sensitivity to pole labels", "Conclusion" ] }
GEM-SciDuet-train-104#paper-1274#slide-2
Learning Commonsense Knowledge from Text
Reporting bias [Gordon and Van Durme 2013]: Commonsense knowledge is rarely explicitly stated. Large knowledge dimensions: Property specified by adjectives: large, heavy, fast, rigid, etc. Creating training examples and building separate models on each type of property requires expensive labeling efforts. Handling unseen properties during the test phase (zero-shot prediction)? Language variation: An ideal model should be able to take flexible natural language inputs. Can we build an efficient commonsense comparison model with word embedding inputs only?
Reporting bias [Gordon and Van Durme 2013]: Commonsense knowledge is rarely explicitly stated. Large knowledge dimensions: Property specified by adjectives: large, heavy, fast, rigid, etc. Creating training examples and building separate models on each type of property requires expensive labeling efforts. Handling unseen properties during the test phase (zero-shot prediction)? Language variation: An ideal model should be able to take flexible natural language inputs. Can we build an efficient commonsense comparison model with word embedding inputs only?
[]
GEM-SciDuet-train-104#paper-1274#slide-3
1274
Extracting Commonsense Properties from Embeddings with Limited Human Guidance
Intelligent systems require common sense, but automatically extracting this knowledge from text can be difficult. We propose and assess methods for extracting one type of commonsense knowledge, object-property comparisons, from pretrained embeddings. In experiments, we show that our approach exceeds the accuracy of previous work but requires substantially less hand-annotated knowledge. Further, we show that an active learning approach that synthesizes common-sense queries can boost accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Automatically extracting common sense from text is a long-standing challenge in natural language processing (Schubert, 2002; Van Durme and Schubert, 2008; Vanderwende, 2005) .", "As argued by Forbes and Yejin (2017) , typical language use may reflect common sense, but the commonsense knowledge itself is not often explicitly stated, due to reporting bias (Gordon and Van Durme, 2013) .", "Thus, additional human knowledge or annotated training data are often used to help systems learn common sense.", "In this paper, we study methods for reducing the amount of human input needed to learn common sense.", "Specifically, we focus on learning relative comparisons of (one-dimensional) object properties, such as the fact that a cantaloupe is more round than a hammer.", "Methods for learning this kind of common sense have been developed previously (e.g.", "Forbes and Choi, 2017) , but the best-performing methods in that previous work requires dozens of manually-annotated frames for each comparison property, to connect the property to how it is indirectly reflected in text-e.g., if text asserts that \"x carries y,\" this implies that x is probably larger than y.", "Our architecture for relative comparisons follows the zero-shot learning paradigm (Palatucci et al., 2009) .", "It takes the form of a neural network that compares a projection of embeddings for each of two objects (e.g.", "\"elephant\" and \"tiger\") to the embeddings for the two poles of the target dimension of comparison (e.g., \"big\" and \"small\" for the size property).", "The projected object embeddings are trained to be closer to the appropriate pole, using a small training set of hand-labeled comparisons.", "Our experiments reveal that our architecture outperforms previous work, despite using less annotated data.", "Further, because our architecture takes the property (pole) labels as arguments, it can extend to the zero-shot setting in which we evaluate on properties not seen in training.", "We find that in zero-shot, our approach outperforms baselines and comes close to supervised results, but providing labels for both poles of the relation rather than just one is important.", "Finally, because the number of properties we wish to learn is large, we experiment with active learning (AL) over a larger property space.", "We show that synthesizing AL queries can be effective using an approach that explicitly models which comparison questions are nonsensical (e.g., is Batman taller than Democracy?).", "We release our code base and a new commonsense data set to the research community.", "1 Problem Definition and Methods We define the task of comparing object properties in two different ways: a three-way classification task, and a four-way classification task.", "In the three-way classification task, we want to estimate the following conditional probability: P (L|O 1 , O 2 , Property), L ∈ { < , > , β‰ˆ }.", "1 https://github.com/yangyiben/PCE For example, P rob(An elephant is larger than a dog) can be expressed as P (L = > |O 1 
= \"elephant\", O 2 = \"dog\", Property = \"size\").", "The three-way classification task has been explored in previous work (Forbes and Choi, 2017) and is only performed on triples where both objects have the property, so that the comparison is meaningful.", "In applications, however, we may not know in advance which comparisons are meaningful.", "Thus, we also define a four-way classification task to include \"not applicable\" as the fourth label, so that inference can be performed on any objectproperty triples.", "In the four-way task, the system is tasked with identifying the nonsensical comparisons.", "Formally, we want to estimate the following conditional probability: P (L|O 1 , O 2 , Property), L∈{ < , > , β‰ˆ , N/A }.", "Three-way Model For each comparison property, we pick an adjective and its antonym to represent the { < , > } labels.", "For example, for the property size, we pick \"big\" and \"small\".", "The adjective \"similar\" serves as the label for β‰ˆ for all properties.", "Under this framework, a relative comparison question, for instance, \"Is a dog bigger than an elephant?", "\", can be formulated as a quintuple query to the model, namely {dog, elephant, small, similar, big}.", "Denoting the word embeddings for tokens in a quintuple query as X, Y , R < , R β‰ˆ , R > , our three-way model is defined as follows: P (L = s|Q) = sof tmax(R s Β· Οƒ((X βŠ• Y )W )), for s ∈ {<, >, β‰ˆ}, where Q is an quintuple query, Οƒ(Β·) is an activation function and W is a learnable weight matrix.", "The symbol βŠ• represents concatenation.", "We refer to this method as PCE (Property Comparison from Embeddings) for the 3-way task.", "We also experiment with generating label representations from just a single ad- jective (property) embedding R < , namely R β‰ˆ = Οƒ(R < W 2 ), R > = Οƒ(R < W 3 ) .", "We refer to this simpler method as PCE(one-pole).", "We note that in both the three-and four-way settings, the question \"A>B?\"", "is equivalent to \"B<A?\".", "We leverage this fact at test time by feeding our network a reversed object pair, and taking the average of the aligned network outputs before the softmax layer to reduce prediction variance.", "We refer to our model without this technique as PCE(no reverse).", "The key distinction of our method is that it learns a projection from the object word embedding space to the label embedding space.", "This allows the model to leverage the property label embeddings to perform zero-shot prediction on properties not observed in training.", "For example, from a training example \"dogs are smaller than elephants\", the model will learn a projection that puts \"dogs\" relatively closer to \"small,\" and far from \"big\" and \"similar.\"", "Doing so may also result in projecting \"dog\" to be closer to \"light\" than to \"heavy,\" such that the model is able to predict \"dogs are lighter than elephants\" despite never being trained on any weight comparison examples.", "Four-way Model Our four-way model is the same as our three-way model, with an additional module to learn whether the comparison is applicable.", "Keeping the other output nodes unchanged, we add an additional component into the softmax layer to output the probability of \"N/A\": h x = Οƒ(XW a ), h y = Οƒ(Y W a ), A i = h i Β· R > + h i Β· R < , P (L = N/A |Q) ∝ exp(A x + A y ).", "Synthesis for Active Learning We propose a method to synthesize informative queries to pose to annotators, a form of active learning (Settles, 2009 ).", "We use the common heuristic that an informative 
training example will have a high uncertainty in the model's predictive distribution.", "We adopt the confidence measure (Culotta and McCallum, 2005) to access the uncertainty of a given example: U ncertainty(x) = 1 βˆ’ max y P (y|x, D train ).", "Good candidates for acquisition should have high uncertainty measure, but we also want to avoid querying outliers.", "As the vocabulary is finite, it is possible to evaluate the uncertainty measures for all possible inputs to synthesize the most uncertain query.", "However, such a greedy policy is expensive and prone to selecting outliers.", "Hence, we adopt a sampling based synthesis strategy: at each round, we generate one random object pair per property, and query the one that achieves the highest uncertainty measure.", "A classical difficulty faced by synthesis approaches to active learning is that they may pro-duce unnatural queries that are difficult for a human to label (Baum and Lang, 1992).", "However, our task formulation includes \"similar\" and \"N/A\" classes that encompass many of the more difficult or confusing comparisons, which we believe aids the effectiveness of the synthesis approach.", "Experiments We now present our experimental results on both the three-way and four-way tasks.", "Data Sets We test our three-way model on the VERB PHYSICS data set from (Forbes and Choi, 2017) .", "As there are only 5 properties in VERB PHYSICS, we also develop a new data set we call PROP-ERTY COMMON SENSE.", "We select 32 commonsense properties to form our property set (e.g., value, roundness, deliciousness, intelligence, etc.).", "We extract object nouns from the McRae Feature Norms dataset (McRae et al., 2005) and add selected named entities to form a object vocabulary of 689 distinct objects.", "We randomly generate 3148 object-property triples, label them and reserve 45% of the data for the test set.", "We further add 5 manually-selected applicable comparison examples per property to our test set, in order to make sure each property has some applicable testing examples.", "To verify the labeling, we have a second annotator redundantly label 200 examples and find a Cohen's Kappa of 0.64, which indicates good annotator agreement (we analyze the source of the disagreements in Section 4.1).", "The training set is used for the passive learning and pool-based active learning, and a human oracle provides labels in the synthesis active learning setting.", "Experimental Setup We experiment with three types of embeddings: GloVe, normalized 300-dimensional embeddings trained on a corpus of 6B tokens (Pennington et al., 2014) (the F&C method (Forbes and Choi, 2017) uses the 100-dimensional version, as it achieves the highest validation accuracy for their methods); Word2vec, normalized 300dimensional embeddings trained on 100B tokens (Mikolov et al., 2013) ; and LSTM, the normalized 1024-dimensional weight matrix from the softmax layer of the Google 1B LSTM language model (Jozefowicz et al., 2016) .", "For training PCE, we use an identity activation function and apply 50% dropout.", "We use the Adam optimizer with default settings to train the models for 800 epochs, minimizing cross entropy loss.", "For zero-shot learning, we adopt a hold-oneproperty-out scheme to test our models' zero-shot performance.", "Finally, for active learning, we use Word2vec embeddings.", "All the models are trained on 200 random training examples to warm up.", "We train for 20 epochs after each label acquisition.", "To smooth noise, we report the average of 20 different 
runs of random (passive learning) and least confident (LC) pool-based active learning (Culotta and McCallum, 2005) baselines.", "We report the average of only 6 runs for an expected model change (EMC) pool-based active learning (Cai et al., 2013) baseline due to its high computational cost, and of only 2 runs for our synthesis active learning approach due to its high labeling cost.", "The pool size is 1540 examples.", "Results In Table 1 , we compare the performance of the three-way PCE model against the existing state of the art on the VERB PHYSICS data set.", "The use of LSTM embeddings in PCE yields the best accuracy for all properties.", "Across all embedding choices, PCE performs as well or better than F&C, despite the fact that PCE does not use the annotated frames that F&C requires (approximately 188 labels per property).", "Thus, our approach matches or exceeds the performance of previous work using significantly less annotated knowledge.", "The lower performance of \"no reverse\" shows that the simple method of averaging over the reversed object pair is effective.", "Table 2 evaluates our models on properties not seen in training (zero-shot learning).", "We compare against a random baseline, and an Emb-Similarity baseline that classifies based on the cosine similarity of the object embeddings to the pole label embeddings (i.e., without the projection layer in PCE).", "PCE outperforms the baselines.", "Although the one-pole method was shown to perform similarly to the two-pole method for properties seen in training (Table 1) , we see that for zero-shot learning, using two poles is important.", "In Table 3 , we show that our four-way models with different embeddings beat both the majority and random baselines on the PROPERTY Table 3 : Accuracy on the four-way task on the PROPERTY COMMON SENSE data.", "COMMON SENSE data.", "Here, the LSTM embeddings perform similarly to the Word2vec embeddings, perhaps because the PROPERTY COM-MON SENSE vocabulary consists of less frequent nouns than in VERB PHYSICS.", "Thus, the Word2vec embeddings are able to catch up due to their larger vocabulary and much larger training corpus.", "Finally, in Figure 1 , we evaluate in the active learning setting.", "The synthesis approach performs best, especially later in training when the training pool for the pool-based methods has only uninformative examples remaining.", "Figure 2 helps explain the relative advantage of the synthesis approach: it is able to continue synthesizing informative (uncertain) queries throughout the entire training run.", "Discussion Sources of annotator disagreement As noted above, we found a \"good\" level of agreement (Cohen's Kappa of 0.64) for our PROPERTY COMMON SENSE data, which is lower than one might expect for task aimed at common sense.", "We analyzed the disagreements and found that they stem from two sources of subjectivity in the task.", "The first is that different labelers may have different thresholds for what counts as similar-a spider and an ant might be marked similar in size for one labeler, but not for another labeler.", "In our data, 58% of the disagreements are cases in which one annotator marks similar while the other says not similar.", "The second is that different labelers have different standards for whether a comparison is N/A.", "For example, in our data set, one labeler labels that a toaster is physically stronger than alcohol, and the other labeler says the comparison is N/A.", "37% of our disagreements are due to this type of subjectivity.", "The 
above two types of subjectivity account for almost all disagreements (95%), and the remaining 5% are due to annotation errors (one of the annotators makes mistake).", "Model Interpretation Since we adopt an identity activation function and a single layer design, it is possible to simplify the mathematical expression of our model to make it more interpretable.", "After accounting for model averaging, we have the following equality: P (L =< |Q) ∝ exp(R < Β· ((X βŠ• Y )W ) + R > Β· ((Y βŠ• X)W )) = exp(R T < (XW 1 + Y W 2 ) + R T > (Y W 1 + XW 2 )) ∝ exp((R < βˆ’ R > ) T (XW 1 + XW 2 )), where W = W 1 βŠ• W 2 .", "So we can define a score of \"R < \" for a object with embedding X as the following: score(X, R < ) = (R < βˆ’ R > ) T (XW 1 + XW 2 ).", "An object with a higher score for R < is more associated with the R < pole than the R > one.", "For example, score(\"elephant\",\"small\") represents how small an elephant is-a larger score indicates a smaller object.", "Table 4 shows smallness scores for 5 randomly picked objects from the VERB PHYSICS data set.", "PCE tends to assign higher scores to the smaller objects in the set.", "Sensitivity to pole labels PCE requires labels for the poles of the target object property.", "Table 5 : Trained and zero-shot accuracies for different word choices analysis to pole labels, evaluating the test accuracy of PCE as the pole label varies among different combinations of synonyms for the size and speed relations.", "We evaluate in both the trained setting (comparable to the results in Table 1 ) and the zero-shot setting (comparable to Table 2 ).", "We see that the trained accuracy remains essentially unchanged for different pole labels.", "In the zeroshot setting, all combinations achieve accuracy that beats the baselines in Table 2 , but the accuracy value is somewhat sensitive to the choice of pole label.", "Exploring how to select pole labels and experimenting with richer pole representations such as textual definitions are items of future work.", "Conclusion In this paper, we presented a method for extracting commonsense knowledge from embeddings.", "Our experiments demonstrate that the approach is effective at performing relative comparisons of object properties using less hand-annotated knowledge than in previous work.", "A synthesis active learner was found to boost accuracy, and further experiments with this approach are an item of future work." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Problem Definition and Methods", "Three-way Model", "Four-way Model", "Synthesis for Active Learning", "Experiments", "Data Sets", "Experimental Setup", "Results", "Sources of annotator disagreement", "Model Interpretation", "Sensitivity to pole labels", "Conclusion" ] }
GEM-SciDuet-train-104#paper-1274#slide-3
Categorical Linear Regressions
Figure 1: Creating a softmax regression model for each property.
Figure 1: Creating a softmax regression model for each property.
[]
GEM-SciDuet-train-104#paper-1274#slide-5
1274
Extracting Commonsense Properties from Embeddings with Limited Human Guidance
Intelligent systems require common sense, but automatically extracting this knowledge from text can be difficult. We propose and assess methods for extracting one type of commonsense knowledge, object-property comparisons, from pretrained embeddings. In experiments, we show that our approach exceeds the accuracy of previous work but requires substantially less hand-annotated knowledge. Further, we show that an active learning approach that synthesizes common-sense queries can boost accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Automatically extracting common sense from text is a long-standing challenge in natural language processing (Schubert, 2002; Van Durme and Schubert, 2008; Vanderwende, 2005) .", "As argued by Forbes and Yejin (2017) , typical language use may reflect common sense, but the commonsense knowledge itself is not often explicitly stated, due to reporting bias (Gordon and Van Durme, 2013) .", "Thus, additional human knowledge or annotated training data are often used to help systems learn common sense.", "In this paper, we study methods for reducing the amount of human input needed to learn common sense.", "Specifically, we focus on learning relative comparisons of (one-dimensional) object properties, such as the fact that a cantaloupe is more round than a hammer.", "Methods for learning this kind of common sense have been developed previously (e.g.", "Forbes and Choi, 2017) , but the best-performing methods in that previous work requires dozens of manually-annotated frames for each comparison property, to connect the property to how it is indirectly reflected in text-e.g., if text asserts that \"x carries y,\" this implies that x is probably larger than y.", "Our architecture for relative comparisons follows the zero-shot learning paradigm (Palatucci et al., 2009) .", "It takes the form of a neural network that compares a projection of embeddings for each of two objects (e.g.", "\"elephant\" and \"tiger\") to the embeddings for the two poles of the target dimension of comparison (e.g., \"big\" and \"small\" for the size property).", "The projected object embeddings are trained to be closer to the appropriate pole, using a small training set of hand-labeled comparisons.", "Our experiments reveal that our architecture outperforms previous work, despite using less annotated data.", "Further, because our architecture takes the property (pole) labels as arguments, it can extend to the zero-shot setting in which we evaluate on properties not seen in training.", "We find that in zero-shot, our approach outperforms baselines and comes close to supervised results, but providing labels for both poles of the relation rather than just one is important.", "Finally, because the number of properties we wish to learn is large, we experiment with active learning (AL) over a larger property space.", "We show that synthesizing AL queries can be effective using an approach that explicitly models which comparison questions are nonsensical (e.g., is Batman taller than Democracy?).", "We release our code base and a new commonsense data set to the research community.", "1 Problem Definition and Methods We define the task of comparing object properties in two different ways: a three-way classification task, and a four-way classification task.", "In the three-way classification task, we want to estimate the following conditional probability: P (L|O 1 , O 2 , Property), L ∈ { < , > , β‰ˆ }.", "1 https://github.com/yangyiben/PCE For example, P rob(An elephant is larger than a dog) can be expressed as P (L = > |O 1 
= \"elephant\", O 2 = \"dog\", Property = \"size\").", "The three-way classification task has been explored in previous work (Forbes and Choi, 2017) and is only performed on triples where both objects have the property, so that the comparison is meaningful.", "In applications, however, we may not know in advance which comparisons are meaningful.", "Thus, we also define a four-way classification task to include \"not applicable\" as the fourth label, so that inference can be performed on any objectproperty triples.", "In the four-way task, the system is tasked with identifying the nonsensical comparisons.", "Formally, we want to estimate the following conditional probability: P (L|O 1 , O 2 , Property), L∈{ < , > , β‰ˆ , N/A }.", "Three-way Model For each comparison property, we pick an adjective and its antonym to represent the { < , > } labels.", "For example, for the property size, we pick \"big\" and \"small\".", "The adjective \"similar\" serves as the label for β‰ˆ for all properties.", "Under this framework, a relative comparison question, for instance, \"Is a dog bigger than an elephant?", "\", can be formulated as a quintuple query to the model, namely {dog, elephant, small, similar, big}.", "Denoting the word embeddings for tokens in a quintuple query as X, Y , R < , R β‰ˆ , R > , our three-way model is defined as follows: P (L = s|Q) = sof tmax(R s Β· Οƒ((X βŠ• Y )W )), for s ∈ {<, >, β‰ˆ}, where Q is an quintuple query, Οƒ(Β·) is an activation function and W is a learnable weight matrix.", "The symbol βŠ• represents concatenation.", "We refer to this method as PCE (Property Comparison from Embeddings) for the 3-way task.", "We also experiment with generating label representations from just a single ad- jective (property) embedding R < , namely R β‰ˆ = Οƒ(R < W 2 ), R > = Οƒ(R < W 3 ) .", "We refer to this simpler method as PCE(one-pole).", "We note that in both the three-and four-way settings, the question \"A>B?\"", "is equivalent to \"B<A?\".", "We leverage this fact at test time by feeding our network a reversed object pair, and taking the average of the aligned network outputs before the softmax layer to reduce prediction variance.", "We refer to our model without this technique as PCE(no reverse).", "The key distinction of our method is that it learns a projection from the object word embedding space to the label embedding space.", "This allows the model to leverage the property label embeddings to perform zero-shot prediction on properties not observed in training.", "For example, from a training example \"dogs are smaller than elephants\", the model will learn a projection that puts \"dogs\" relatively closer to \"small,\" and far from \"big\" and \"similar.\"", "Doing so may also result in projecting \"dog\" to be closer to \"light\" than to \"heavy,\" such that the model is able to predict \"dogs are lighter than elephants\" despite never being trained on any weight comparison examples.", "Four-way Model Our four-way model is the same as our three-way model, with an additional module to learn whether the comparison is applicable.", "Keeping the other output nodes unchanged, we add an additional component into the softmax layer to output the probability of \"N/A\": h x = Οƒ(XW a ), h y = Οƒ(Y W a ), A i = h i Β· R > + h i Β· R < , P (L = N/A |Q) ∝ exp(A x + A y ).", "Synthesis for Active Learning We propose a method to synthesize informative queries to pose to annotators, a form of active learning (Settles, 2009 ).", "We use the common heuristic that an informative 
training example will have a high uncertainty in the model's predictive distribution.", "We adopt the confidence measure (Culotta and McCallum, 2005) to access the uncertainty of a given example: U ncertainty(x) = 1 βˆ’ max y P (y|x, D train ).", "Good candidates for acquisition should have high uncertainty measure, but we also want to avoid querying outliers.", "As the vocabulary is finite, it is possible to evaluate the uncertainty measures for all possible inputs to synthesize the most uncertain query.", "However, such a greedy policy is expensive and prone to selecting outliers.", "Hence, we adopt a sampling based synthesis strategy: at each round, we generate one random object pair per property, and query the one that achieves the highest uncertainty measure.", "A classical difficulty faced by synthesis approaches to active learning is that they may pro-duce unnatural queries that are difficult for a human to label (Baum and Lang, 1992).", "However, our task formulation includes \"similar\" and \"N/A\" classes that encompass many of the more difficult or confusing comparisons, which we believe aids the effectiveness of the synthesis approach.", "Experiments We now present our experimental results on both the three-way and four-way tasks.", "Data Sets We test our three-way model on the VERB PHYSICS data set from (Forbes and Choi, 2017) .", "As there are only 5 properties in VERB PHYSICS, we also develop a new data set we call PROP-ERTY COMMON SENSE.", "We select 32 commonsense properties to form our property set (e.g., value, roundness, deliciousness, intelligence, etc.).", "We extract object nouns from the McRae Feature Norms dataset (McRae et al., 2005) and add selected named entities to form a object vocabulary of 689 distinct objects.", "We randomly generate 3148 object-property triples, label them and reserve 45% of the data for the test set.", "We further add 5 manually-selected applicable comparison examples per property to our test set, in order to make sure each property has some applicable testing examples.", "To verify the labeling, we have a second annotator redundantly label 200 examples and find a Cohen's Kappa of 0.64, which indicates good annotator agreement (we analyze the source of the disagreements in Section 4.1).", "The training set is used for the passive learning and pool-based active learning, and a human oracle provides labels in the synthesis active learning setting.", "Experimental Setup We experiment with three types of embeddings: GloVe, normalized 300-dimensional embeddings trained on a corpus of 6B tokens (Pennington et al., 2014) (the F&C method (Forbes and Choi, 2017) uses the 100-dimensional version, as it achieves the highest validation accuracy for their methods); Word2vec, normalized 300dimensional embeddings trained on 100B tokens (Mikolov et al., 2013) ; and LSTM, the normalized 1024-dimensional weight matrix from the softmax layer of the Google 1B LSTM language model (Jozefowicz et al., 2016) .", "For training PCE, we use an identity activation function and apply 50% dropout.", "We use the Adam optimizer with default settings to train the models for 800 epochs, minimizing cross entropy loss.", "For zero-shot learning, we adopt a hold-oneproperty-out scheme to test our models' zero-shot performance.", "Finally, for active learning, we use Word2vec embeddings.", "All the models are trained on 200 random training examples to warm up.", "We train for 20 epochs after each label acquisition.", "To smooth noise, we report the average of 20 different 
runs of random (passive learning) and least confident (LC) pool-based active learning (Culotta and McCallum, 2005) baselines.", "We report the average of only 6 runs for an expected model change (EMC) pool-based active learning (Cai et al., 2013) baseline due to its high computational cost, and of only 2 runs for our synthesis active learning approach due to its high labeling cost.", "The pool size is 1540 examples.", "Results In Table 1 , we compare the performance of the three-way PCE model against the existing state of the art on the VERB PHYSICS data set.", "The use of LSTM embeddings in PCE yields the best accuracy for all properties.", "Across all embedding choices, PCE performs as well or better than F&C, despite the fact that PCE does not use the annotated frames that F&C requires (approximately 188 labels per property).", "Thus, our approach matches or exceeds the performance of previous work using significantly less annotated knowledge.", "The lower performance of \"no reverse\" shows that the simple method of averaging over the reversed object pair is effective.", "Table 2 evaluates our models on properties not seen in training (zero-shot learning).", "We compare against a random baseline, and an Emb-Similarity baseline that classifies based on the cosine similarity of the object embeddings to the pole label embeddings (i.e., without the projection layer in PCE).", "PCE outperforms the baselines.", "Although the one-pole method was shown to perform similarly to the two-pole method for properties seen in training (Table 1) , we see that for zero-shot learning, using two poles is important.", "In Table 3 , we show that our four-way models with different embeddings beat both the majority and random baselines on the PROPERTY Table 3 : Accuracy on the four-way task on the PROPERTY COMMON SENSE data.", "COMMON SENSE data.", "Here, the LSTM embeddings perform similarly to the Word2vec embeddings, perhaps because the PROPERTY COM-MON SENSE vocabulary consists of less frequent nouns than in VERB PHYSICS.", "Thus, the Word2vec embeddings are able to catch up due to their larger vocabulary and much larger training corpus.", "Finally, in Figure 1 , we evaluate in the active learning setting.", "The synthesis approach performs best, especially later in training when the training pool for the pool-based methods has only uninformative examples remaining.", "Figure 2 helps explain the relative advantage of the synthesis approach: it is able to continue synthesizing informative (uncertain) queries throughout the entire training run.", "Discussion Sources of annotator disagreement As noted above, we found a \"good\" level of agreement (Cohen's Kappa of 0.64) for our PROPERTY COMMON SENSE data, which is lower than one might expect for task aimed at common sense.", "We analyzed the disagreements and found that they stem from two sources of subjectivity in the task.", "The first is that different labelers may have different thresholds for what counts as similar-a spider and an ant might be marked similar in size for one labeler, but not for another labeler.", "In our data, 58% of the disagreements are cases in which one annotator marks similar while the other says not similar.", "The second is that different labelers have different standards for whether a comparison is N/A.", "For example, in our data set, one labeler labels that a toaster is physically stronger than alcohol, and the other labeler says the comparison is N/A.", "37% of our disagreements are due to this type of subjectivity.", "The 
above two types of subjectivity account for almost all disagreements (95%), and the remaining 5% are due to annotation errors (one of the annotators makes mistake).", "Model Interpretation Since we adopt an identity activation function and a single layer design, it is possible to simplify the mathematical expression of our model to make it more interpretable.", "After accounting for model averaging, we have the following equality: P (L =< |Q) ∝ exp(R < Β· ((X βŠ• Y )W ) + R > Β· ((Y βŠ• X)W )) = exp(R T < (XW 1 + Y W 2 ) + R T > (Y W 1 + XW 2 )) ∝ exp((R < βˆ’ R > ) T (XW 1 + XW 2 )), where W = W 1 βŠ• W 2 .", "So we can define a score of \"R < \" for a object with embedding X as the following: score(X, R < ) = (R < βˆ’ R > ) T (XW 1 + XW 2 ).", "An object with a higher score for R < is more associated with the R < pole than the R > one.", "For example, score(\"elephant\",\"small\") represents how small an elephant is-a larger score indicates a smaller object.", "Table 4 shows smallness scores for 5 randomly picked objects from the VERB PHYSICS data set.", "PCE tends to assign higher scores to the smaller objects in the set.", "Sensitivity to pole labels PCE requires labels for the poles of the target object property.", "Table 5 : Trained and zero-shot accuracies for different word choices analysis to pole labels, evaluating the test accuracy of PCE as the pole label varies among different combinations of synonyms for the size and speed relations.", "We evaluate in both the trained setting (comparable to the results in Table 1 ) and the zero-shot setting (comparable to Table 2 ).", "We see that the trained accuracy remains essentially unchanged for different pole labels.", "In the zeroshot setting, all combinations achieve accuracy that beats the baselines in Table 2 , but the accuracy value is somewhat sensitive to the choice of pole label.", "Exploring how to select pole labels and experimenting with richer pole representations such as textual definitions are items of future work.", "Conclusion In this paper, we presented a method for extracting commonsense knowledge from embeddings.", "Our experiments demonstrate that the approach is effective at performing relative comparisons of object properties using less hand-annotated knowledge than in previous work.", "A synthesis active learner was found to boost accuracy, and further experiments with this approach are an item of future work." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Problem Definition and Methods", "Three-way Model", "Four-way Model", "Synthesis for Active Learning", "Experiments", "Data Sets", "Experimental Setup", "Results", "Sources of annotator disagreement", "Model Interpretation", "Sensitivity to pole labels", "Conclusion" ] }
GEM-SciDuet-train-104#paper-1274#slide-5
Data
VERB PHYSICS (5 physical properties) [Forbes and Choi 2017] PROPERTY COMMON SENSE (32 commonsense properties)
VERB PHYSICS (5 physical properties) [Forbes and Choi 2017] PROPERTY COMMON SENSE (32 commonsense properties)
[]
GEM-SciDuet-train-104#paper-1274#slide-6
1274
Extracting Commonsense Properties from Embeddings with Limited Human Guidance
Intelligent systems require common sense, but automatically extracting this knowledge from text can be difficult. We propose and assess methods for extracting one type of commonsense knowledge, object-property comparisons, from pretrained embeddings. In experiments, we show that our approach exceeds the accuracy of previous work but requires substantially less hand-annotated knowledge. Further, we show that an active learning approach that synthesizes common-sense queries can boost accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Automatically extracting common sense from text is a long-standing challenge in natural language processing (Schubert, 2002; Van Durme and Schubert, 2008; Vanderwende, 2005) .", "As argued by Forbes and Yejin (2017) , typical language use may reflect common sense, but the commonsense knowledge itself is not often explicitly stated, due to reporting bias (Gordon and Van Durme, 2013) .", "Thus, additional human knowledge or annotated training data are often used to help systems learn common sense.", "In this paper, we study methods for reducing the amount of human input needed to learn common sense.", "Specifically, we focus on learning relative comparisons of (one-dimensional) object properties, such as the fact that a cantaloupe is more round than a hammer.", "Methods for learning this kind of common sense have been developed previously (e.g.", "Forbes and Choi, 2017) , but the best-performing methods in that previous work requires dozens of manually-annotated frames for each comparison property, to connect the property to how it is indirectly reflected in text-e.g., if text asserts that \"x carries y,\" this implies that x is probably larger than y.", "Our architecture for relative comparisons follows the zero-shot learning paradigm (Palatucci et al., 2009) .", "It takes the form of a neural network that compares a projection of embeddings for each of two objects (e.g.", "\"elephant\" and \"tiger\") to the embeddings for the two poles of the target dimension of comparison (e.g., \"big\" and \"small\" for the size property).", "The projected object embeddings are trained to be closer to the appropriate pole, using a small training set of hand-labeled comparisons.", "Our experiments reveal that our architecture outperforms previous work, despite using less annotated data.", "Further, because our architecture takes the property (pole) labels as arguments, it can extend to the zero-shot setting in which we evaluate on properties not seen in training.", "We find that in zero-shot, our approach outperforms baselines and comes close to supervised results, but providing labels for both poles of the relation rather than just one is important.", "Finally, because the number of properties we wish to learn is large, we experiment with active learning (AL) over a larger property space.", "We show that synthesizing AL queries can be effective using an approach that explicitly models which comparison questions are nonsensical (e.g., is Batman taller than Democracy?).", "We release our code base and a new commonsense data set to the research community.", "1 Problem Definition and Methods We define the task of comparing object properties in two different ways: a three-way classification task, and a four-way classification task.", "In the three-way classification task, we want to estimate the following conditional probability: P (L|O 1 , O 2 , Property), L ∈ { < , > , β‰ˆ }.", "1 https://github.com/yangyiben/PCE For example, P rob(An elephant is larger than a dog) can be expressed as P (L = > |O 1 
= \"elephant\", O 2 = \"dog\", Property = \"size\").", "The three-way classification task has been explored in previous work (Forbes and Choi, 2017) and is only performed on triples where both objects have the property, so that the comparison is meaningful.", "In applications, however, we may not know in advance which comparisons are meaningful.", "Thus, we also define a four-way classification task to include \"not applicable\" as the fourth label, so that inference can be performed on any objectproperty triples.", "In the four-way task, the system is tasked with identifying the nonsensical comparisons.", "Formally, we want to estimate the following conditional probability: P (L|O 1 , O 2 , Property), L∈{ < , > , β‰ˆ , N/A }.", "Three-way Model For each comparison property, we pick an adjective and its antonym to represent the { < , > } labels.", "For example, for the property size, we pick \"big\" and \"small\".", "The adjective \"similar\" serves as the label for β‰ˆ for all properties.", "Under this framework, a relative comparison question, for instance, \"Is a dog bigger than an elephant?", "\", can be formulated as a quintuple query to the model, namely {dog, elephant, small, similar, big}.", "Denoting the word embeddings for tokens in a quintuple query as X, Y , R < , R β‰ˆ , R > , our three-way model is defined as follows: P (L = s|Q) = sof tmax(R s Β· Οƒ((X βŠ• Y )W )), for s ∈ {<, >, β‰ˆ}, where Q is an quintuple query, Οƒ(Β·) is an activation function and W is a learnable weight matrix.", "The symbol βŠ• represents concatenation.", "We refer to this method as PCE (Property Comparison from Embeddings) for the 3-way task.", "We also experiment with generating label representations from just a single ad- jective (property) embedding R < , namely R β‰ˆ = Οƒ(R < W 2 ), R > = Οƒ(R < W 3 ) .", "We refer to this simpler method as PCE(one-pole).", "We note that in both the three-and four-way settings, the question \"A>B?\"", "is equivalent to \"B<A?\".", "We leverage this fact at test time by feeding our network a reversed object pair, and taking the average of the aligned network outputs before the softmax layer to reduce prediction variance.", "We refer to our model without this technique as PCE(no reverse).", "The key distinction of our method is that it learns a projection from the object word embedding space to the label embedding space.", "This allows the model to leverage the property label embeddings to perform zero-shot prediction on properties not observed in training.", "For example, from a training example \"dogs are smaller than elephants\", the model will learn a projection that puts \"dogs\" relatively closer to \"small,\" and far from \"big\" and \"similar.\"", "Doing so may also result in projecting \"dog\" to be closer to \"light\" than to \"heavy,\" such that the model is able to predict \"dogs are lighter than elephants\" despite never being trained on any weight comparison examples.", "Four-way Model Our four-way model is the same as our three-way model, with an additional module to learn whether the comparison is applicable.", "Keeping the other output nodes unchanged, we add an additional component into the softmax layer to output the probability of \"N/A\": h x = Οƒ(XW a ), h y = Οƒ(Y W a ), A i = h i Β· R > + h i Β· R < , P (L = N/A |Q) ∝ exp(A x + A y ).", "Synthesis for Active Learning We propose a method to synthesize informative queries to pose to annotators, a form of active learning (Settles, 2009 ).", "We use the common heuristic that an informative 
training example will have a high uncertainty in the model's predictive distribution.", "We adopt the confidence measure (Culotta and McCallum, 2005) to access the uncertainty of a given example: U ncertainty(x) = 1 βˆ’ max y P (y|x, D train ).", "Good candidates for acquisition should have high uncertainty measure, but we also want to avoid querying outliers.", "As the vocabulary is finite, it is possible to evaluate the uncertainty measures for all possible inputs to synthesize the most uncertain query.", "However, such a greedy policy is expensive and prone to selecting outliers.", "Hence, we adopt a sampling based synthesis strategy: at each round, we generate one random object pair per property, and query the one that achieves the highest uncertainty measure.", "A classical difficulty faced by synthesis approaches to active learning is that they may pro-duce unnatural queries that are difficult for a human to label (Baum and Lang, 1992).", "However, our task formulation includes \"similar\" and \"N/A\" classes that encompass many of the more difficult or confusing comparisons, which we believe aids the effectiveness of the synthesis approach.", "Experiments We now present our experimental results on both the three-way and four-way tasks.", "Data Sets We test our three-way model on the VERB PHYSICS data set from (Forbes and Choi, 2017) .", "As there are only 5 properties in VERB PHYSICS, we also develop a new data set we call PROP-ERTY COMMON SENSE.", "We select 32 commonsense properties to form our property set (e.g., value, roundness, deliciousness, intelligence, etc.).", "We extract object nouns from the McRae Feature Norms dataset (McRae et al., 2005) and add selected named entities to form a object vocabulary of 689 distinct objects.", "We randomly generate 3148 object-property triples, label them and reserve 45% of the data for the test set.", "We further add 5 manually-selected applicable comparison examples per property to our test set, in order to make sure each property has some applicable testing examples.", "To verify the labeling, we have a second annotator redundantly label 200 examples and find a Cohen's Kappa of 0.64, which indicates good annotator agreement (we analyze the source of the disagreements in Section 4.1).", "The training set is used for the passive learning and pool-based active learning, and a human oracle provides labels in the synthesis active learning setting.", "Experimental Setup We experiment with three types of embeddings: GloVe, normalized 300-dimensional embeddings trained on a corpus of 6B tokens (Pennington et al., 2014) (the F&C method (Forbes and Choi, 2017) uses the 100-dimensional version, as it achieves the highest validation accuracy for their methods); Word2vec, normalized 300dimensional embeddings trained on 100B tokens (Mikolov et al., 2013) ; and LSTM, the normalized 1024-dimensional weight matrix from the softmax layer of the Google 1B LSTM language model (Jozefowicz et al., 2016) .", "For training PCE, we use an identity activation function and apply 50% dropout.", "We use the Adam optimizer with default settings to train the models for 800 epochs, minimizing cross entropy loss.", "For zero-shot learning, we adopt a hold-oneproperty-out scheme to test our models' zero-shot performance.", "Finally, for active learning, we use Word2vec embeddings.", "All the models are trained on 200 random training examples to warm up.", "We train for 20 epochs after each label acquisition.", "To smooth noise, we report the average of 20 different 
runs of random (passive learning) and least confident (LC) pool-based active learning (Culotta and McCallum, 2005) baselines.", "We report the average of only 6 runs for an expected model change (EMC) pool-based active learning (Cai et al., 2013) baseline due to its high computational cost, and of only 2 runs for our synthesis active learning approach due to its high labeling cost.", "The pool size is 1540 examples.", "Results In Table 1 , we compare the performance of the three-way PCE model against the existing state of the art on the VERB PHYSICS data set.", "The use of LSTM embeddings in PCE yields the best accuracy for all properties.", "Across all embedding choices, PCE performs as well or better than F&C, despite the fact that PCE does not use the annotated frames that F&C requires (approximately 188 labels per property).", "Thus, our approach matches or exceeds the performance of previous work using significantly less annotated knowledge.", "The lower performance of \"no reverse\" shows that the simple method of averaging over the reversed object pair is effective.", "Table 2 evaluates our models on properties not seen in training (zero-shot learning).", "We compare against a random baseline, and an Emb-Similarity baseline that classifies based on the cosine similarity of the object embeddings to the pole label embeddings (i.e., without the projection layer in PCE).", "PCE outperforms the baselines.", "Although the one-pole method was shown to perform similarly to the two-pole method for properties seen in training (Table 1) , we see that for zero-shot learning, using two poles is important.", "In Table 3 , we show that our four-way models with different embeddings beat both the majority and random baselines on the PROPERTY Table 3 : Accuracy on the four-way task on the PROPERTY COMMON SENSE data.", "COMMON SENSE data.", "Here, the LSTM embeddings perform similarly to the Word2vec embeddings, perhaps because the PROPERTY COM-MON SENSE vocabulary consists of less frequent nouns than in VERB PHYSICS.", "Thus, the Word2vec embeddings are able to catch up due to their larger vocabulary and much larger training corpus.", "Finally, in Figure 1 , we evaluate in the active learning setting.", "The synthesis approach performs best, especially later in training when the training pool for the pool-based methods has only uninformative examples remaining.", "Figure 2 helps explain the relative advantage of the synthesis approach: it is able to continue synthesizing informative (uncertain) queries throughout the entire training run.", "Discussion Sources of annotator disagreement As noted above, we found a \"good\" level of agreement (Cohen's Kappa of 0.64) for our PROPERTY COMMON SENSE data, which is lower than one might expect for task aimed at common sense.", "We analyzed the disagreements and found that they stem from two sources of subjectivity in the task.", "The first is that different labelers may have different thresholds for what counts as similar-a spider and an ant might be marked similar in size for one labeler, but not for another labeler.", "In our data, 58% of the disagreements are cases in which one annotator marks similar while the other says not similar.", "The second is that different labelers have different standards for whether a comparison is N/A.", "For example, in our data set, one labeler labels that a toaster is physically stronger than alcohol, and the other labeler says the comparison is N/A.", "37% of our disagreements are due to this type of subjectivity.", "The 
above two types of subjectivity account for almost all disagreements (95%), and the remaining 5% are due to annotation errors (one of the annotators makes mistake).", "Model Interpretation Since we adopt an identity activation function and a single layer design, it is possible to simplify the mathematical expression of our model to make it more interpretable.", "After accounting for model averaging, we have the following equality: P (L =< |Q) ∝ exp(R < · ((X ⊕ Y )W ) + R > · ((Y ⊕ X)W )) = exp(R T < (XW 1 + Y W 2 ) + R T > (Y W 1 + XW 2 )) ∝ exp((R < − R > ) T (XW 1 + XW 2 )), where W = W 1 ⊕ W 2 .", "So we can define a score of \"R < \" for a object with embedding X as the following: score(X, R < ) = (R < − R > ) T (XW 1 + XW 2 ).", "An object with a higher score for R < is more associated with the R < pole than the R > one.", "For example, score(\"elephant\",\"small\") represents how small an elephant is-a larger score indicates a smaller object.", "Table 4 shows smallness scores for 5 randomly picked objects from the VERB PHYSICS data set.", "PCE tends to assign higher scores to the smaller objects in the set.", "Sensitivity to pole labels PCE requires labels for the poles of the target object property.", "Table 5 : Trained and zero-shot accuracies for different word choices analysis to pole labels, evaluating the test accuracy of PCE as the pole label varies among different combinations of synonyms for the size and speed relations.", "We evaluate in both the trained setting (comparable to the results in Table 1 ) and the zero-shot setting (comparable to Table 2 ).", "We see that the trained accuracy remains essentially unchanged for different pole labels.", "In the zeroshot setting, all combinations achieve accuracy that beats the baselines in Table 2 , but the accuracy value is somewhat sensitive to the choice of pole label.", "Exploring how to select pole labels and experimenting with richer pole representations such as textual definitions are items of future work.", "Conclusion In this paper, we presented a method for extracting commonsense knowledge from embeddings.", "Our experiments demonstrate that the approach is effective at performing relative comparisons of object properties using less hand-annotated knowledge than in previous work.", "A synthesis active learner was found to boost accuracy, and further experiments with this approach are an item of future work." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Problem Definition and Methods", "Three-way Model", "Four-way Model", "Synthesis for Active Learning", "Experiments", "Data Sets", "Experimental Setup", "Results", "Sources of annotator disagreement", "Model Interpretation", "Sensitivity to pole labels", "Conclusion" ] }
GEM-SciDuet-train-104#paper-1274#slide-6
Results Supervised Performance
Testsize weight stren rigid speed overall Table 1: Supervised accuracy on the VERB PHYSICS data set. PCE outperforms the F&C model from previous work.
Testsize weight stren rigid speed overall Table 1: Supervised accuracy on the VERB PHYSICS data set. PCE outperforms the F&C model from previous work.
[]
GEM-SciDuet-train-104#paper-1274#slide-7
1274
Extracting Commonsense Properties from Embeddings with Limited Human Guidance
Intelligent systems require common sense, but automatically extracting this knowledge from text can be difficult. We propose and assess methods for extracting one type of commonsense knowledge, object-property comparisons, from pretrained embeddings. In experiments, we show that our approach exceeds the accuracy of previous work but requires substantially less hand-annotated knowledge. Further, we show that an active learning approach that synthesizes common-sense queries can boost accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Automatically extracting common sense from text is a long-standing challenge in natural language processing (Schubert, 2002; Van Durme and Schubert, 2008; Vanderwende, 2005) .", "As argued by Forbes and Yejin (2017) , typical language use may reflect common sense, but the commonsense knowledge itself is not often explicitly stated, due to reporting bias (Gordon and Van Durme, 2013) .", "Thus, additional human knowledge or annotated training data are often used to help systems learn common sense.", "In this paper, we study methods for reducing the amount of human input needed to learn common sense.", "Specifically, we focus on learning relative comparisons of (one-dimensional) object properties, such as the fact that a cantaloupe is more round than a hammer.", "Methods for learning this kind of common sense have been developed previously (e.g.", "Forbes and Choi, 2017) , but the best-performing methods in that previous work requires dozens of manually-annotated frames for each comparison property, to connect the property to how it is indirectly reflected in text-e.g., if text asserts that \"x carries y,\" this implies that x is probably larger than y.", "Our architecture for relative comparisons follows the zero-shot learning paradigm (Palatucci et al., 2009) .", "It takes the form of a neural network that compares a projection of embeddings for each of two objects (e.g.", "\"elephant\" and \"tiger\") to the embeddings for the two poles of the target dimension of comparison (e.g., \"big\" and \"small\" for the size property).", "The projected object embeddings are trained to be closer to the appropriate pole, using a small training set of hand-labeled comparisons.", "Our experiments reveal that our architecture outperforms previous work, despite using less annotated data.", "Further, because our architecture takes the property (pole) labels as arguments, it can extend to the zero-shot setting in which we evaluate on properties not seen in training.", "We find that in zero-shot, our approach outperforms baselines and comes close to supervised results, but providing labels for both poles of the relation rather than just one is important.", "Finally, because the number of properties we wish to learn is large, we experiment with active learning (AL) over a larger property space.", "We show that synthesizing AL queries can be effective using an approach that explicitly models which comparison questions are nonsensical (e.g., is Batman taller than Democracy?).", "We release our code base and a new commonsense data set to the research community.", "1 Problem Definition and Methods We define the task of comparing object properties in two different ways: a three-way classification task, and a four-way classification task.", "In the three-way classification task, we want to estimate the following conditional probability: P (L|O 1 , O 2 , Property), L ∈ { < , > , β‰ˆ }.", "1 https://github.com/yangyiben/PCE For example, P rob(An elephant is larger than a dog) can be expressed as P (L = > |O 1 
= \"elephant\", O 2 = \"dog\", Property = \"size\").", "The three-way classification task has been explored in previous work (Forbes and Choi, 2017) and is only performed on triples where both objects have the property, so that the comparison is meaningful.", "In applications, however, we may not know in advance which comparisons are meaningful.", "Thus, we also define a four-way classification task to include \"not applicable\" as the fourth label, so that inference can be performed on any objectproperty triples.", "In the four-way task, the system is tasked with identifying the nonsensical comparisons.", "Formally, we want to estimate the following conditional probability: P (L|O 1 , O 2 , Property), L∈{ < , > , β‰ˆ , N/A }.", "Three-way Model For each comparison property, we pick an adjective and its antonym to represent the { < , > } labels.", "For example, for the property size, we pick \"big\" and \"small\".", "The adjective \"similar\" serves as the label for β‰ˆ for all properties.", "Under this framework, a relative comparison question, for instance, \"Is a dog bigger than an elephant?", "\", can be formulated as a quintuple query to the model, namely {dog, elephant, small, similar, big}.", "Denoting the word embeddings for tokens in a quintuple query as X, Y , R < , R β‰ˆ , R > , our three-way model is defined as follows: P (L = s|Q) = sof tmax(R s Β· Οƒ((X βŠ• Y )W )), for s ∈ {<, >, β‰ˆ}, where Q is an quintuple query, Οƒ(Β·) is an activation function and W is a learnable weight matrix.", "The symbol βŠ• represents concatenation.", "We refer to this method as PCE (Property Comparison from Embeddings) for the 3-way task.", "We also experiment with generating label representations from just a single ad- jective (property) embedding R < , namely R β‰ˆ = Οƒ(R < W 2 ), R > = Οƒ(R < W 3 ) .", "We refer to this simpler method as PCE(one-pole).", "We note that in both the three-and four-way settings, the question \"A>B?\"", "is equivalent to \"B<A?\".", "We leverage this fact at test time by feeding our network a reversed object pair, and taking the average of the aligned network outputs before the softmax layer to reduce prediction variance.", "We refer to our model without this technique as PCE(no reverse).", "The key distinction of our method is that it learns a projection from the object word embedding space to the label embedding space.", "This allows the model to leverage the property label embeddings to perform zero-shot prediction on properties not observed in training.", "For example, from a training example \"dogs are smaller than elephants\", the model will learn a projection that puts \"dogs\" relatively closer to \"small,\" and far from \"big\" and \"similar.\"", "Doing so may also result in projecting \"dog\" to be closer to \"light\" than to \"heavy,\" such that the model is able to predict \"dogs are lighter than elephants\" despite never being trained on any weight comparison examples.", "Four-way Model Our four-way model is the same as our three-way model, with an additional module to learn whether the comparison is applicable.", "Keeping the other output nodes unchanged, we add an additional component into the softmax layer to output the probability of \"N/A\": h x = Οƒ(XW a ), h y = Οƒ(Y W a ), A i = h i Β· R > + h i Β· R < , P (L = N/A |Q) ∝ exp(A x + A y ).", "Synthesis for Active Learning We propose a method to synthesize informative queries to pose to annotators, a form of active learning (Settles, 2009 ).", "We use the common heuristic that an informative 
training example will have a high uncertainty in the model's predictive distribution.", "We adopt the confidence measure (Culotta and McCallum, 2005) to access the uncertainty of a given example: U ncertainty(x) = 1 βˆ’ max y P (y|x, D train ).", "Good candidates for acquisition should have high uncertainty measure, but we also want to avoid querying outliers.", "As the vocabulary is finite, it is possible to evaluate the uncertainty measures for all possible inputs to synthesize the most uncertain query.", "However, such a greedy policy is expensive and prone to selecting outliers.", "Hence, we adopt a sampling based synthesis strategy: at each round, we generate one random object pair per property, and query the one that achieves the highest uncertainty measure.", "A classical difficulty faced by synthesis approaches to active learning is that they may pro-duce unnatural queries that are difficult for a human to label (Baum and Lang, 1992).", "However, our task formulation includes \"similar\" and \"N/A\" classes that encompass many of the more difficult or confusing comparisons, which we believe aids the effectiveness of the synthesis approach.", "Experiments We now present our experimental results on both the three-way and four-way tasks.", "Data Sets We test our three-way model on the VERB PHYSICS data set from (Forbes and Choi, 2017) .", "As there are only 5 properties in VERB PHYSICS, we also develop a new data set we call PROP-ERTY COMMON SENSE.", "We select 32 commonsense properties to form our property set (e.g., value, roundness, deliciousness, intelligence, etc.).", "We extract object nouns from the McRae Feature Norms dataset (McRae et al., 2005) and add selected named entities to form a object vocabulary of 689 distinct objects.", "We randomly generate 3148 object-property triples, label them and reserve 45% of the data for the test set.", "We further add 5 manually-selected applicable comparison examples per property to our test set, in order to make sure each property has some applicable testing examples.", "To verify the labeling, we have a second annotator redundantly label 200 examples and find a Cohen's Kappa of 0.64, which indicates good annotator agreement (we analyze the source of the disagreements in Section 4.1).", "The training set is used for the passive learning and pool-based active learning, and a human oracle provides labels in the synthesis active learning setting.", "Experimental Setup We experiment with three types of embeddings: GloVe, normalized 300-dimensional embeddings trained on a corpus of 6B tokens (Pennington et al., 2014) (the F&C method (Forbes and Choi, 2017) uses the 100-dimensional version, as it achieves the highest validation accuracy for their methods); Word2vec, normalized 300dimensional embeddings trained on 100B tokens (Mikolov et al., 2013) ; and LSTM, the normalized 1024-dimensional weight matrix from the softmax layer of the Google 1B LSTM language model (Jozefowicz et al., 2016) .", "For training PCE, we use an identity activation function and apply 50% dropout.", "We use the Adam optimizer with default settings to train the models for 800 epochs, minimizing cross entropy loss.", "For zero-shot learning, we adopt a hold-oneproperty-out scheme to test our models' zero-shot performance.", "Finally, for active learning, we use Word2vec embeddings.", "All the models are trained on 200 random training examples to warm up.", "We train for 20 epochs after each label acquisition.", "To smooth noise, we report the average of 20 different 
runs of random (passive learning) and least confident (LC) pool-based active learning (Culotta and McCallum, 2005) baselines.", "We report the average of only 6 runs for an expected model change (EMC) pool-based active learning (Cai et al., 2013) baseline due to its high computational cost, and of only 2 runs for our synthesis active learning approach due to its high labeling cost.", "The pool size is 1540 examples.", "Results In Table 1 , we compare the performance of the three-way PCE model against the existing state of the art on the VERB PHYSICS data set.", "The use of LSTM embeddings in PCE yields the best accuracy for all properties.", "Across all embedding choices, PCE performs as well or better than F&C, despite the fact that PCE does not use the annotated frames that F&C requires (approximately 188 labels per property).", "Thus, our approach matches or exceeds the performance of previous work using significantly less annotated knowledge.", "The lower performance of \"no reverse\" shows that the simple method of averaging over the reversed object pair is effective.", "Table 2 evaluates our models on properties not seen in training (zero-shot learning).", "We compare against a random baseline, and an Emb-Similarity baseline that classifies based on the cosine similarity of the object embeddings to the pole label embeddings (i.e., without the projection layer in PCE).", "PCE outperforms the baselines.", "Although the one-pole method was shown to perform similarly to the two-pole method for properties seen in training (Table 1) , we see that for zero-shot learning, using two poles is important.", "In Table 3 , we show that our four-way models with different embeddings beat both the majority and random baselines on the PROPERTY Table 3 : Accuracy on the four-way task on the PROPERTY COMMON SENSE data.", "COMMON SENSE data.", "Here, the LSTM embeddings perform similarly to the Word2vec embeddings, perhaps because the PROPERTY COM-MON SENSE vocabulary consists of less frequent nouns than in VERB PHYSICS.", "Thus, the Word2vec embeddings are able to catch up due to their larger vocabulary and much larger training corpus.", "Finally, in Figure 1 , we evaluate in the active learning setting.", "The synthesis approach performs best, especially later in training when the training pool for the pool-based methods has only uninformative examples remaining.", "Figure 2 helps explain the relative advantage of the synthesis approach: it is able to continue synthesizing informative (uncertain) queries throughout the entire training run.", "Discussion Sources of annotator disagreement As noted above, we found a \"good\" level of agreement (Cohen's Kappa of 0.64) for our PROPERTY COMMON SENSE data, which is lower than one might expect for task aimed at common sense.", "We analyzed the disagreements and found that they stem from two sources of subjectivity in the task.", "The first is that different labelers may have different thresholds for what counts as similar-a spider and an ant might be marked similar in size for one labeler, but not for another labeler.", "In our data, 58% of the disagreements are cases in which one annotator marks similar while the other says not similar.", "The second is that different labelers have different standards for whether a comparison is N/A.", "For example, in our data set, one labeler labels that a toaster is physically stronger than alcohol, and the other labeler says the comparison is N/A.", "37% of our disagreements are due to this type of subjectivity.", "The 
above two types of subjectivity account for almost all disagreements (95%), and the remaining 5% are due to annotation errors (one of the annotators makes mistake).", "Model Interpretation Since we adopt an identity activation function and a single layer design, it is possible to simplify the mathematical expression of our model to make it more interpretable.", "After accounting for model averaging, we have the following equality: P (L =< |Q) ∝ exp(R < · ((X ⊕ Y )W ) + R > · ((Y ⊕ X)W )) = exp(R T < (XW 1 + Y W 2 ) + R T > (Y W 1 + XW 2 )) ∝ exp((R < − R > ) T (XW 1 + XW 2 )), where W = W 1 ⊕ W 2 .", "So we can define a score of \"R < \" for a object with embedding X as the following: score(X, R < ) = (R < − R > ) T (XW 1 + XW 2 ).", "An object with a higher score for R < is more associated with the R < pole than the R > one.", "For example, score(\"elephant\",\"small\") represents how small an elephant is-a larger score indicates a smaller object.", "Table 4 shows smallness scores for 5 randomly picked objects from the VERB PHYSICS data set.", "PCE tends to assign higher scores to the smaller objects in the set.", "Sensitivity to pole labels PCE requires labels for the poles of the target object property.", "Table 5 : Trained and zero-shot accuracies for different word choices analysis to pole labels, evaluating the test accuracy of PCE as the pole label varies among different combinations of synonyms for the size and speed relations.", "We evaluate in both the trained setting (comparable to the results in Table 1 ) and the zero-shot setting (comparable to Table 2 ).", "We see that the trained accuracy remains essentially unchanged for different pole labels.", "In the zeroshot setting, all combinations achieve accuracy that beats the baselines in Table 2 , but the accuracy value is somewhat sensitive to the choice of pole label.", "Exploring how to select pole labels and experimenting with richer pole representations such as textual definitions are items of future work.", "Conclusion In this paper, we presented a method for extracting commonsense knowledge from embeddings.", "Our experiments demonstrate that the approach is effective at performing relative comparisons of object properties using less hand-annotated knowledge than in previous work.", "A synthesis active learner was found to boost accuracy, and further experiments with this approach are an item of future work." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Problem Definition and Methods", "Three-way Model", "Four-way Model", "Synthesis for Active Learning", "Experiments", "Data Sets", "Experimental Setup", "Results", "Sources of annotator disagreement", "Model Interpretation", "Sensitivity to pole labels", "Conclusion" ] }
GEM-SciDuet-train-104#paper-1274#slide-7
Results Zero shot Prediction
Testsize weight stren rigid speed Table 2: Accuracy of zero-shot learning on the VERB PHYSICS data set(using
Testsize weight stren rigid speed Table 2: Accuracy of zero-shot learning on the VERB PHYSICS data set(using
[]
GEM-SciDuet-train-104#paper-1274#slide-8
1274
Extracting Commonsense Properties from Embeddings with Limited Human Guidance
Intelligent systems require common sense, but automatically extracting this knowledge from text can be difficult. We propose and assess methods for extracting one type of commonsense knowledge, object-property comparisons, from pretrained embeddings. In experiments, we show that our approach exceeds the accuracy of previous work but requires substantially less hand-annotated knowledge. Further, we show that an active learning approach that synthesizes common-sense queries can boost accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Automatically extracting common sense from text is a long-standing challenge in natural language processing (Schubert, 2002; Van Durme and Schubert, 2008; Vanderwende, 2005) .", "As argued by Forbes and Yejin (2017) , typical language use may reflect common sense, but the commonsense knowledge itself is not often explicitly stated, due to reporting bias (Gordon and Van Durme, 2013) .", "Thus, additional human knowledge or annotated training data are often used to help systems learn common sense.", "In this paper, we study methods for reducing the amount of human input needed to learn common sense.", "Specifically, we focus on learning relative comparisons of (one-dimensional) object properties, such as the fact that a cantaloupe is more round than a hammer.", "Methods for learning this kind of common sense have been developed previously (e.g.", "Forbes and Choi, 2017) , but the best-performing methods in that previous work requires dozens of manually-annotated frames for each comparison property, to connect the property to how it is indirectly reflected in text-e.g., if text asserts that \"x carries y,\" this implies that x is probably larger than y.", "Our architecture for relative comparisons follows the zero-shot learning paradigm (Palatucci et al., 2009) .", "It takes the form of a neural network that compares a projection of embeddings for each of two objects (e.g.", "\"elephant\" and \"tiger\") to the embeddings for the two poles of the target dimension of comparison (e.g., \"big\" and \"small\" for the size property).", "The projected object embeddings are trained to be closer to the appropriate pole, using a small training set of hand-labeled comparisons.", "Our experiments reveal that our architecture outperforms previous work, despite using less annotated data.", "Further, because our architecture takes the property (pole) labels as arguments, it can extend to the zero-shot setting in which we evaluate on properties not seen in training.", "We find that in zero-shot, our approach outperforms baselines and comes close to supervised results, but providing labels for both poles of the relation rather than just one is important.", "Finally, because the number of properties we wish to learn is large, we experiment with active learning (AL) over a larger property space.", "We show that synthesizing AL queries can be effective using an approach that explicitly models which comparison questions are nonsensical (e.g., is Batman taller than Democracy?).", "We release our code base and a new commonsense data set to the research community.", "1 Problem Definition and Methods We define the task of comparing object properties in two different ways: a three-way classification task, and a four-way classification task.", "In the three-way classification task, we want to estimate the following conditional probability: P (L|O 1 , O 2 , Property), L ∈ { < , > , β‰ˆ }.", "1 https://github.com/yangyiben/PCE For example, P rob(An elephant is larger than a dog) can be expressed as P (L = > |O 1 
= \"elephant\", O 2 = \"dog\", Property = \"size\").", "The three-way classification task has been explored in previous work (Forbes and Choi, 2017) and is only performed on triples where both objects have the property, so that the comparison is meaningful.", "In applications, however, we may not know in advance which comparisons are meaningful.", "Thus, we also define a four-way classification task to include \"not applicable\" as the fourth label, so that inference can be performed on any objectproperty triples.", "In the four-way task, the system is tasked with identifying the nonsensical comparisons.", "Formally, we want to estimate the following conditional probability: P (L|O 1 , O 2 , Property), L∈{ < , > , β‰ˆ , N/A }.", "Three-way Model For each comparison property, we pick an adjective and its antonym to represent the { < , > } labels.", "For example, for the property size, we pick \"big\" and \"small\".", "The adjective \"similar\" serves as the label for β‰ˆ for all properties.", "Under this framework, a relative comparison question, for instance, \"Is a dog bigger than an elephant?", "\", can be formulated as a quintuple query to the model, namely {dog, elephant, small, similar, big}.", "Denoting the word embeddings for tokens in a quintuple query as X, Y , R < , R β‰ˆ , R > , our three-way model is defined as follows: P (L = s|Q) = sof tmax(R s Β· Οƒ((X βŠ• Y )W )), for s ∈ {<, >, β‰ˆ}, where Q is an quintuple query, Οƒ(Β·) is an activation function and W is a learnable weight matrix.", "The symbol βŠ• represents concatenation.", "We refer to this method as PCE (Property Comparison from Embeddings) for the 3-way task.", "We also experiment with generating label representations from just a single ad- jective (property) embedding R < , namely R β‰ˆ = Οƒ(R < W 2 ), R > = Οƒ(R < W 3 ) .", "We refer to this simpler method as PCE(one-pole).", "We note that in both the three-and four-way settings, the question \"A>B?\"", "is equivalent to \"B<A?\".", "We leverage this fact at test time by feeding our network a reversed object pair, and taking the average of the aligned network outputs before the softmax layer to reduce prediction variance.", "We refer to our model without this technique as PCE(no reverse).", "The key distinction of our method is that it learns a projection from the object word embedding space to the label embedding space.", "This allows the model to leverage the property label embeddings to perform zero-shot prediction on properties not observed in training.", "For example, from a training example \"dogs are smaller than elephants\", the model will learn a projection that puts \"dogs\" relatively closer to \"small,\" and far from \"big\" and \"similar.\"", "Doing so may also result in projecting \"dog\" to be closer to \"light\" than to \"heavy,\" such that the model is able to predict \"dogs are lighter than elephants\" despite never being trained on any weight comparison examples.", "Four-way Model Our four-way model is the same as our three-way model, with an additional module to learn whether the comparison is applicable.", "Keeping the other output nodes unchanged, we add an additional component into the softmax layer to output the probability of \"N/A\": h x = Οƒ(XW a ), h y = Οƒ(Y W a ), A i = h i Β· R > + h i Β· R < , P (L = N/A |Q) ∝ exp(A x + A y ).", "Synthesis for Active Learning We propose a method to synthesize informative queries to pose to annotators, a form of active learning (Settles, 2009 ).", "We use the common heuristic that an informative 
training example will have a high uncertainty in the model's predictive distribution.", "We adopt the confidence measure (Culotta and McCallum, 2005) to access the uncertainty of a given example: U ncertainty(x) = 1 βˆ’ max y P (y|x, D train ).", "Good candidates for acquisition should have high uncertainty measure, but we also want to avoid querying outliers.", "As the vocabulary is finite, it is possible to evaluate the uncertainty measures for all possible inputs to synthesize the most uncertain query.", "However, such a greedy policy is expensive and prone to selecting outliers.", "Hence, we adopt a sampling based synthesis strategy: at each round, we generate one random object pair per property, and query the one that achieves the highest uncertainty measure.", "A classical difficulty faced by synthesis approaches to active learning is that they may pro-duce unnatural queries that are difficult for a human to label (Baum and Lang, 1992).", "However, our task formulation includes \"similar\" and \"N/A\" classes that encompass many of the more difficult or confusing comparisons, which we believe aids the effectiveness of the synthesis approach.", "Experiments We now present our experimental results on both the three-way and four-way tasks.", "Data Sets We test our three-way model on the VERB PHYSICS data set from (Forbes and Choi, 2017) .", "As there are only 5 properties in VERB PHYSICS, we also develop a new data set we call PROP-ERTY COMMON SENSE.", "We select 32 commonsense properties to form our property set (e.g., value, roundness, deliciousness, intelligence, etc.).", "We extract object nouns from the McRae Feature Norms dataset (McRae et al., 2005) and add selected named entities to form a object vocabulary of 689 distinct objects.", "We randomly generate 3148 object-property triples, label them and reserve 45% of the data for the test set.", "We further add 5 manually-selected applicable comparison examples per property to our test set, in order to make sure each property has some applicable testing examples.", "To verify the labeling, we have a second annotator redundantly label 200 examples and find a Cohen's Kappa of 0.64, which indicates good annotator agreement (we analyze the source of the disagreements in Section 4.1).", "The training set is used for the passive learning and pool-based active learning, and a human oracle provides labels in the synthesis active learning setting.", "Experimental Setup We experiment with three types of embeddings: GloVe, normalized 300-dimensional embeddings trained on a corpus of 6B tokens (Pennington et al., 2014) (the F&C method (Forbes and Choi, 2017) uses the 100-dimensional version, as it achieves the highest validation accuracy for their methods); Word2vec, normalized 300dimensional embeddings trained on 100B tokens (Mikolov et al., 2013) ; and LSTM, the normalized 1024-dimensional weight matrix from the softmax layer of the Google 1B LSTM language model (Jozefowicz et al., 2016) .", "For training PCE, we use an identity activation function and apply 50% dropout.", "We use the Adam optimizer with default settings to train the models for 800 epochs, minimizing cross entropy loss.", "For zero-shot learning, we adopt a hold-oneproperty-out scheme to test our models' zero-shot performance.", "Finally, for active learning, we use Word2vec embeddings.", "All the models are trained on 200 random training examples to warm up.", "We train for 20 epochs after each label acquisition.", "To smooth noise, we report the average of 20 different 
runs of random (passive learning) and least confident (LC) pool-based active learning (Culotta and McCallum, 2005) baselines.", "We report the average of only 6 runs for an expected model change (EMC) pool-based active learning (Cai et al., 2013) baseline due to its high computational cost, and of only 2 runs for our synthesis active learning approach due to its high labeling cost.", "The pool size is 1540 examples.", "Results In Table 1 , we compare the performance of the three-way PCE model against the existing state of the art on the VERB PHYSICS data set.", "The use of LSTM embeddings in PCE yields the best accuracy for all properties.", "Across all embedding choices, PCE performs as well or better than F&C, despite the fact that PCE does not use the annotated frames that F&C requires (approximately 188 labels per property).", "Thus, our approach matches or exceeds the performance of previous work using significantly less annotated knowledge.", "The lower performance of \"no reverse\" shows that the simple method of averaging over the reversed object pair is effective.", "Table 2 evaluates our models on properties not seen in training (zero-shot learning).", "We compare against a random baseline, and an Emb-Similarity baseline that classifies based on the cosine similarity of the object embeddings to the pole label embeddings (i.e., without the projection layer in PCE).", "PCE outperforms the baselines.", "Although the one-pole method was shown to perform similarly to the two-pole method for properties seen in training (Table 1) , we see that for zero-shot learning, using two poles is important.", "In Table 3 , we show that our four-way models with different embeddings beat both the majority and random baselines on the PROPERTY Table 3 : Accuracy on the four-way task on the PROPERTY COMMON SENSE data.", "COMMON SENSE data.", "Here, the LSTM embeddings perform similarly to the Word2vec embeddings, perhaps because the PROPERTY COM-MON SENSE vocabulary consists of less frequent nouns than in VERB PHYSICS.", "Thus, the Word2vec embeddings are able to catch up due to their larger vocabulary and much larger training corpus.", "Finally, in Figure 1 , we evaluate in the active learning setting.", "The synthesis approach performs best, especially later in training when the training pool for the pool-based methods has only uninformative examples remaining.", "Figure 2 helps explain the relative advantage of the synthesis approach: it is able to continue synthesizing informative (uncertain) queries throughout the entire training run.", "Discussion Sources of annotator disagreement As noted above, we found a \"good\" level of agreement (Cohen's Kappa of 0.64) for our PROPERTY COMMON SENSE data, which is lower than one might expect for task aimed at common sense.", "We analyzed the disagreements and found that they stem from two sources of subjectivity in the task.", "The first is that different labelers may have different thresholds for what counts as similar-a spider and an ant might be marked similar in size for one labeler, but not for another labeler.", "In our data, 58% of the disagreements are cases in which one annotator marks similar while the other says not similar.", "The second is that different labelers have different standards for whether a comparison is N/A.", "For example, in our data set, one labeler labels that a toaster is physically stronger than alcohol, and the other labeler says the comparison is N/A.", "37% of our disagreements are due to this type of subjectivity.", "The 
above two types of subjectivity account for almost all disagreements (95%), and the remaining 5% are due to annotation errors (one of the annotators makes mistake).", "Model Interpretation Since we adopt an identity activation function and a single layer design, it is possible to simplify the mathematical expression of our model to make it more interpretable.", "After accounting for model averaging, we have the following equality: P (L =< |Q) ∝ exp(R < · ((X ⊕ Y )W ) + R > · ((Y ⊕ X)W )) = exp(R T < (XW 1 + Y W 2 ) + R T > (Y W 1 + XW 2 )) ∝ exp((R < − R > ) T (XW 1 + XW 2 )), where W = W 1 ⊕ W 2 .", "So we can define a score of \"R < \" for a object with embedding X as the following: score(X, R < ) = (R < − R > ) T (XW 1 + XW 2 ).", "An object with a higher score for R < is more associated with the R < pole than the R > one.", "For example, score(\"elephant\",\"small\") represents how small an elephant is-a larger score indicates a smaller object.", "Table 4 shows smallness scores for 5 randomly picked objects from the VERB PHYSICS data set.", "PCE tends to assign higher scores to the smaller objects in the set.", "Sensitivity to pole labels PCE requires labels for the poles of the target object property.", "Table 5 : Trained and zero-shot accuracies for different word choices analysis to pole labels, evaluating the test accuracy of PCE as the pole label varies among different combinations of synonyms for the size and speed relations.", "We evaluate in both the trained setting (comparable to the results in Table 1 ) and the zero-shot setting (comparable to Table 2 ).", "We see that the trained accuracy remains essentially unchanged for different pole labels.", "In the zeroshot setting, all combinations achieve accuracy that beats the baselines in Table 2 , but the accuracy value is somewhat sensitive to the choice of pole label.", "Exploring how to select pole labels and experimenting with richer pole representations such as textual definitions are items of future work.", "Conclusion In this paper, we presented a method for extracting commonsense knowledge from embeddings.", "Our experiments demonstrate that the approach is effective at performing relative comparisons of object properties using less hand-annotated knowledge than in previous work.", "A synthesis active learner was found to boost accuracy, and further experiments with this approach are an item of future work." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Problem Definition and Methods", "Three-way Model", "Four-way Model", "Synthesis for Active Learning", "Experiments", "Data Sets", "Experimental Setup", "Results", "Sources of annotator disagreement", "Model Interpretation", "Sensitivity to pole labels", "Conclusion" ] }
GEM-SciDuet-train-104#paper-1274#slide-8
Results
Table 3: Accuracy on the four-way task on the PROPERTY COMMON SENSE data.
Table 3: Accuracy on the four-way task on the PROPERTY COMMON SENSE data.
[]
GEM-SciDuet-train-104#paper-1274#slide-10
1274
Extracting Commonsense Properties from Embeddings with Limited Human Guidance
Intelligent systems require common sense, but automatically extracting this knowledge from text can be difficult. We propose and assess methods for extracting one type of commonsense knowledge, object-property comparisons, from pretrained embeddings. In experiments, we show that our approach exceeds the accuracy of previous work but requires substantially less hand-annotated knowledge. Further, we show that an active learning approach that synthesizes common-sense queries can boost accuracy.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Automatically extracting common sense from text is a long-standing challenge in natural language processing (Schubert, 2002; Van Durme and Schubert, 2008; Vanderwende, 2005) .", "As argued by Forbes and Yejin (2017) , typical language use may reflect common sense, but the commonsense knowledge itself is not often explicitly stated, due to reporting bias (Gordon and Van Durme, 2013) .", "Thus, additional human knowledge or annotated training data are often used to help systems learn common sense.", "In this paper, we study methods for reducing the amount of human input needed to learn common sense.", "Specifically, we focus on learning relative comparisons of (one-dimensional) object properties, such as the fact that a cantaloupe is more round than a hammer.", "Methods for learning this kind of common sense have been developed previously (e.g.", "Forbes and Choi, 2017) , but the best-performing methods in that previous work requires dozens of manually-annotated frames for each comparison property, to connect the property to how it is indirectly reflected in text-e.g., if text asserts that \"x carries y,\" this implies that x is probably larger than y.", "Our architecture for relative comparisons follows the zero-shot learning paradigm (Palatucci et al., 2009) .", "It takes the form of a neural network that compares a projection of embeddings for each of two objects (e.g.", "\"elephant\" and \"tiger\") to the embeddings for the two poles of the target dimension of comparison (e.g., \"big\" and \"small\" for the size property).", "The projected object embeddings are trained to be closer to the appropriate pole, using a small training set of hand-labeled comparisons.", "Our experiments reveal that our architecture outperforms previous work, despite using less annotated data.", "Further, because our architecture takes the property (pole) labels as arguments, it can extend to the zero-shot setting in which we evaluate on properties not seen in training.", "We find that in zero-shot, our approach outperforms baselines and comes close to supervised results, but providing labels for both poles of the relation rather than just one is important.", "Finally, because the number of properties we wish to learn is large, we experiment with active learning (AL) over a larger property space.", "We show that synthesizing AL queries can be effective using an approach that explicitly models which comparison questions are nonsensical (e.g., is Batman taller than Democracy?).", "We release our code base and a new commonsense data set to the research community.", "1 Problem Definition and Methods We define the task of comparing object properties in two different ways: a three-way classification task, and a four-way classification task.", "In the three-way classification task, we want to estimate the following conditional probability: P (L|O 1 , O 2 , Property), L ∈ { < , > , β‰ˆ }.", "1 https://github.com/yangyiben/PCE For example, P rob(An elephant is larger than a dog) can be expressed as P (L = > |O 1 
= \"elephant\", O 2 = \"dog\", Property = \"size\").", "The three-way classification task has been explored in previous work (Forbes and Choi, 2017) and is only performed on triples where both objects have the property, so that the comparison is meaningful.", "In applications, however, we may not know in advance which comparisons are meaningful.", "Thus, we also define a four-way classification task to include \"not applicable\" as the fourth label, so that inference can be performed on any objectproperty triples.", "In the four-way task, the system is tasked with identifying the nonsensical comparisons.", "Formally, we want to estimate the following conditional probability: P (L|O 1 , O 2 , Property), L∈{ < , > , β‰ˆ , N/A }.", "Three-way Model For each comparison property, we pick an adjective and its antonym to represent the { < , > } labels.", "For example, for the property size, we pick \"big\" and \"small\".", "The adjective \"similar\" serves as the label for β‰ˆ for all properties.", "Under this framework, a relative comparison question, for instance, \"Is a dog bigger than an elephant?", "\", can be formulated as a quintuple query to the model, namely {dog, elephant, small, similar, big}.", "Denoting the word embeddings for tokens in a quintuple query as X, Y , R < , R β‰ˆ , R > , our three-way model is defined as follows: P (L = s|Q) = sof tmax(R s Β· Οƒ((X βŠ• Y )W )), for s ∈ {<, >, β‰ˆ}, where Q is an quintuple query, Οƒ(Β·) is an activation function and W is a learnable weight matrix.", "The symbol βŠ• represents concatenation.", "We refer to this method as PCE (Property Comparison from Embeddings) for the 3-way task.", "We also experiment with generating label representations from just a single ad- jective (property) embedding R < , namely R β‰ˆ = Οƒ(R < W 2 ), R > = Οƒ(R < W 3 ) .", "We refer to this simpler method as PCE(one-pole).", "We note that in both the three-and four-way settings, the question \"A>B?\"", "is equivalent to \"B<A?\".", "We leverage this fact at test time by feeding our network a reversed object pair, and taking the average of the aligned network outputs before the softmax layer to reduce prediction variance.", "We refer to our model without this technique as PCE(no reverse).", "The key distinction of our method is that it learns a projection from the object word embedding space to the label embedding space.", "This allows the model to leverage the property label embeddings to perform zero-shot prediction on properties not observed in training.", "For example, from a training example \"dogs are smaller than elephants\", the model will learn a projection that puts \"dogs\" relatively closer to \"small,\" and far from \"big\" and \"similar.\"", "Doing so may also result in projecting \"dog\" to be closer to \"light\" than to \"heavy,\" such that the model is able to predict \"dogs are lighter than elephants\" despite never being trained on any weight comparison examples.", "Four-way Model Our four-way model is the same as our three-way model, with an additional module to learn whether the comparison is applicable.", "Keeping the other output nodes unchanged, we add an additional component into the softmax layer to output the probability of \"N/A\": h x = Οƒ(XW a ), h y = Οƒ(Y W a ), A i = h i Β· R > + h i Β· R < , P (L = N/A |Q) ∝ exp(A x + A y ).", "Synthesis for Active Learning We propose a method to synthesize informative queries to pose to annotators, a form of active learning (Settles, 2009 ).", "We use the common heuristic that an informative 
training example will have a high uncertainty in the model's predictive distribution.", "We adopt the confidence measure (Culotta and McCallum, 2005) to access the uncertainty of a given example: U ncertainty(x) = 1 βˆ’ max y P (y|x, D train ).", "Good candidates for acquisition should have high uncertainty measure, but we also want to avoid querying outliers.", "As the vocabulary is finite, it is possible to evaluate the uncertainty measures for all possible inputs to synthesize the most uncertain query.", "However, such a greedy policy is expensive and prone to selecting outliers.", "Hence, we adopt a sampling based synthesis strategy: at each round, we generate one random object pair per property, and query the one that achieves the highest uncertainty measure.", "A classical difficulty faced by synthesis approaches to active learning is that they may pro-duce unnatural queries that are difficult for a human to label (Baum and Lang, 1992).", "However, our task formulation includes \"similar\" and \"N/A\" classes that encompass many of the more difficult or confusing comparisons, which we believe aids the effectiveness of the synthesis approach.", "Experiments We now present our experimental results on both the three-way and four-way tasks.", "Data Sets We test our three-way model on the VERB PHYSICS data set from (Forbes and Choi, 2017) .", "As there are only 5 properties in VERB PHYSICS, we also develop a new data set we call PROP-ERTY COMMON SENSE.", "We select 32 commonsense properties to form our property set (e.g., value, roundness, deliciousness, intelligence, etc.).", "We extract object nouns from the McRae Feature Norms dataset (McRae et al., 2005) and add selected named entities to form a object vocabulary of 689 distinct objects.", "We randomly generate 3148 object-property triples, label them and reserve 45% of the data for the test set.", "We further add 5 manually-selected applicable comparison examples per property to our test set, in order to make sure each property has some applicable testing examples.", "To verify the labeling, we have a second annotator redundantly label 200 examples and find a Cohen's Kappa of 0.64, which indicates good annotator agreement (we analyze the source of the disagreements in Section 4.1).", "The training set is used for the passive learning and pool-based active learning, and a human oracle provides labels in the synthesis active learning setting.", "Experimental Setup We experiment with three types of embeddings: GloVe, normalized 300-dimensional embeddings trained on a corpus of 6B tokens (Pennington et al., 2014) (the F&C method (Forbes and Choi, 2017) uses the 100-dimensional version, as it achieves the highest validation accuracy for their methods); Word2vec, normalized 300dimensional embeddings trained on 100B tokens (Mikolov et al., 2013) ; and LSTM, the normalized 1024-dimensional weight matrix from the softmax layer of the Google 1B LSTM language model (Jozefowicz et al., 2016) .", "For training PCE, we use an identity activation function and apply 50% dropout.", "We use the Adam optimizer with default settings to train the models for 800 epochs, minimizing cross entropy loss.", "For zero-shot learning, we adopt a hold-oneproperty-out scheme to test our models' zero-shot performance.", "Finally, for active learning, we use Word2vec embeddings.", "All the models are trained on 200 random training examples to warm up.", "We train for 20 epochs after each label acquisition.", "To smooth noise, we report the average of 20 different 
runs of random (passive learning) and least confident (LC) pool-based active learning (Culotta and McCallum, 2005) baselines.", "We report the average of only 6 runs for an expected model change (EMC) pool-based active learning (Cai et al., 2013) baseline due to its high computational cost, and of only 2 runs for our synthesis active learning approach due to its high labeling cost.", "The pool size is 1540 examples.", "Results In Table 1 , we compare the performance of the three-way PCE model against the existing state of the art on the VERB PHYSICS data set.", "The use of LSTM embeddings in PCE yields the best accuracy for all properties.", "Across all embedding choices, PCE performs as well or better than F&C, despite the fact that PCE does not use the annotated frames that F&C requires (approximately 188 labels per property).", "Thus, our approach matches or exceeds the performance of previous work using significantly less annotated knowledge.", "The lower performance of \"no reverse\" shows that the simple method of averaging over the reversed object pair is effective.", "Table 2 evaluates our models on properties not seen in training (zero-shot learning).", "We compare against a random baseline, and an Emb-Similarity baseline that classifies based on the cosine similarity of the object embeddings to the pole label embeddings (i.e., without the projection layer in PCE).", "PCE outperforms the baselines.", "Although the one-pole method was shown to perform similarly to the two-pole method for properties seen in training (Table 1) , we see that for zero-shot learning, using two poles is important.", "In Table 3 , we show that our four-way models with different embeddings beat both the majority and random baselines on the PROPERTY Table 3 : Accuracy on the four-way task on the PROPERTY COMMON SENSE data.", "COMMON SENSE data.", "Here, the LSTM embeddings perform similarly to the Word2vec embeddings, perhaps because the PROPERTY COM-MON SENSE vocabulary consists of less frequent nouns than in VERB PHYSICS.", "Thus, the Word2vec embeddings are able to catch up due to their larger vocabulary and much larger training corpus.", "Finally, in Figure 1 , we evaluate in the active learning setting.", "The synthesis approach performs best, especially later in training when the training pool for the pool-based methods has only uninformative examples remaining.", "Figure 2 helps explain the relative advantage of the synthesis approach: it is able to continue synthesizing informative (uncertain) queries throughout the entire training run.", "Discussion Sources of annotator disagreement As noted above, we found a \"good\" level of agreement (Cohen's Kappa of 0.64) for our PROPERTY COMMON SENSE data, which is lower than one might expect for task aimed at common sense.", "We analyzed the disagreements and found that they stem from two sources of subjectivity in the task.", "The first is that different labelers may have different thresholds for what counts as similar-a spider and an ant might be marked similar in size for one labeler, but not for another labeler.", "In our data, 58% of the disagreements are cases in which one annotator marks similar while the other says not similar.", "The second is that different labelers have different standards for whether a comparison is N/A.", "For example, in our data set, one labeler labels that a toaster is physically stronger than alcohol, and the other labeler says the comparison is N/A.", "37% of our disagreements are due to this type of subjectivity.", "The 
above two types of subjectivity account for almost all disagreements (95%), and the remaining 5% are due to annotation errors (one of the annotators makes mistake).", "Model Interpretation Since we adopt an identity activation function and a single layer design, it is possible to simplify the mathematical expression of our model to make it more interpretable.", "After accounting for model averaging, we have the following equality: P (L =< |Q) ∝ exp(R < · ((X ⊕ Y )W ) + R > · ((Y ⊕ X)W )) = exp(R T < (XW 1 + Y W 2 ) + R T > (Y W 1 + XW 2 )) ∝ exp((R < − R > ) T (XW 1 + XW 2 )), where W = W 1 ⊕ W 2 .", "So we can define a score of \"R < \" for a object with embedding X as the following: score(X, R < ) = (R < − R > ) T (XW 1 + XW 2 ).", "An object with a higher score for R < is more associated with the R < pole than the R > one.", "For example, score(\"elephant\",\"small\") represents how small an elephant is-a larger score indicates a smaller object.", "Table 4 shows smallness scores for 5 randomly picked objects from the VERB PHYSICS data set.", "PCE tends to assign higher scores to the smaller objects in the set.", "Sensitivity to pole labels PCE requires labels for the poles of the target object property.", "Table 5 : Trained and zero-shot accuracies for different word choices analysis to pole labels, evaluating the test accuracy of PCE as the pole label varies among different combinations of synonyms for the size and speed relations.", "We evaluate in both the trained setting (comparable to the results in Table 1 ) and the zero-shot setting (comparable to Table 2 ).", "We see that the trained accuracy remains essentially unchanged for different pole labels.", "In the zeroshot setting, all combinations achieve accuracy that beats the baselines in Table 2 , but the accuracy value is somewhat sensitive to the choice of pole label.", "Exploring how to select pole labels and experimenting with richer pole representations such as textual definitions are items of future work.", "Conclusion In this paper, we presented a method for extracting commonsense knowledge from embeddings.", "Our experiments demonstrate that the approach is effective at performing relative comparisons of object properties using less hand-annotated knowledge than in previous work.", "A synthesis active learner was found to boost accuracy, and further experiments with this approach are an item of future work." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "2.3", "3", "3.1", "3.2", "3.3", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Problem Definition and Methods", "Three-way Model", "Four-way Model", "Synthesis for Active Learning", "Experiments", "Data Sets", "Experimental Setup", "Results", "Sources of annotator disagreement", "Model Interpretation", "Sensitivity to pole labels", "Conclusion" ] }
GEM-SciDuet-train-104#paper-1274#slide-10
Active Learning
. a Lc Synthesis So o b uN n 1
[Figure residue: active learning learning-curve plot; the only legible labels are "LC" and "Synthesis"]
[]
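The active learning experiments reported in the paper content above compare random sampling, least-confident (LC) pool-based selection, expected model change (EMC), and query synthesis. As a point of reference for the LC baseline only, here is a small scikit-learn sketch of least-confident query selection; the pool features, seed labels, and classifier are placeholders, not the setup used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def least_confident_indices(model, pool_X, n_queries=10):
    """Return indices of pool examples whose predicted label is least certain."""
    probs = model.predict_proba(pool_X)
    confidence = probs.max(axis=1)             # confidence in the argmax class
    return np.argsort(confidence)[:n_queries]  # lowest-confidence examples first

rng = np.random.default_rng(0)
pool_X = rng.normal(size=(1540, 20))  # pool size mirrors the 1540 examples in the text
seed_X = rng.normal(size=(30, 20))
seed_y = np.arange(30) % 3            # toy 3-class seed labels

model = LogisticRegression(max_iter=1000).fit(seed_X, seed_y)
print(least_confident_indices(model, pool_X))  # examples an annotator would label next
```

Each selected example would then be labeled, added to the training set, and the model refit before the next round of queries.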
GEM-SciDuet-train-105#paper-1275#slide-0
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the topperforming model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
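The EndingReg baseline described in the paper content above reduces to surface features over each standalone ending fed to an L2-regularized logistic regression. The sketch below implements two of the six listed feature families (token count and character 4-grams) with scikit-learn; the labeled toy endings are taken from the example stories quoted earlier, min_df is lowered for the tiny corpus, and the VADER sentiment, Yngve complexity, and POS n-gram features are omitted for brevity.

```python
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Standalone endings, labeled 1 = 'right ending' and 0 = 'wrong ending'.
endings = [
    "She was offered the new job at a higher salary.",
    "Ramona had no reason to want to change jobs anymore.",
    "A passer-by helped her up and helped her collect the papers.",
    "The teacher got up and walked home leaving the papers behind.",
]
labels = [1, 0, 1, 0]

# Feature 1: number of tokens per ending.
token_counts = np.array([[len(e.split())] for e in endings])
# Feature 6: character 4-grams (the paper keeps only n-grams seen at least 5 times).
char_grams = CountVectorizer(analyzer="char", ngram_range=(4, 4), min_df=1)
X = hstack([token_counts, char_grams.fit_transform(endings)])

# L2-regularized logistic regression over the ending-only features.
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, labels)
print(clf.predict_proba(X)[:, 1])  # P(ending reads like a 'right' ending)
```

Scoring each candidate ending independently in this way is exactly why such a model can ignore the four-sentence context entirely, which is the bias the paper sets out to measure and reduce.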
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-0
Story Understanding and Story Generation
An extremely challenging and long-running goal in AI (Charniak, 1972). The biggest challenge: having commonsense knowledge for the interpretation of narrative events. Requires commonsense reasoning, going beyond pattern recognition and explicit information extraction.
An extremely challenging and long-running goal in AI (Charniak, 1972). The biggest challenge: having commonsense knowledge for the interpretation of narrative events. Requires commonsense reasoning, going beyond pattern recognition and explicit information extraction.
[]
GEM-SciDuet-train-105#paper-1275#slide-1
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the topperforming model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
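Table 5 in the content above summarizes ending bias as the standard deviation of n-gram counts between right and wrong endings. One plausible way to compute such a statistic is sketched below; the toy ending pairs, the unigram granularity, and the exact aggregation are assumptions rather than the authors' exact procedure.

```python
from collections import Counter

right_endings = [
    "She was offered the new job at a higher salary.",
    "A passer-by helped her up and helped her collect the papers.",
]
wrong_endings = [
    "Ramona had no reason to want to change jobs anymore.",
    "The teacher got up and walked home leaving the papers behind.",
]

def ngram_counts(sentences, n=1):
    counts = Counter()
    for s in sentences:
        toks = s.lower().split()
        counts.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return counts

right, wrong = ngram_counts(right_endings), ngram_counts(wrong_endings)
vocab = sorted(set(right) | set(wrong))
# Per-n-gram count difference between the two ending classes; a less biased
# dataset shows a smaller spread in these differences.
diffs = [right[g] - wrong[g] for g in vocab]
mean = sum(diffs) / len(diffs)
std = (sum((d - mean) ** 2 for d in diffs) / len(diffs)) ** 0.5
print(round(std, 3))
```

Under this reading, the drop from SCT-v1.0 to SCT-v1.5 in Table 5 corresponds to right and wrong endings drawing on much more similar vocabularies.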
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-1
ROC Stories Mostafazadeh et al 2016
A collection of high quality short five sentence stories. Each story: Has a specific beginning and ending, where something happens in between; Has nothing irrelevant or redundant to the core story. "The Test": Jennifer has a big exam tomorrow. She got so stressed, she pulled an all-nighter. She went into class the next day, weary as can be. Her teacher stated that the test is postponed for next week. Jennifer felt bittersweet about it.
A collection of high quality short five sentence stories. Each story: Has a specific beginning and ending, where something happens in between; Has nothing irrelevant or redundant to the core story. "The Test": Jennifer has a big exam tomorrow. She got so stressed, she pulled an all-nighter. She went into class the next day, weary as can be. Her teacher stated that the test is postponed for next week. Jennifer felt bittersweet about it.
[]
GEM-SciDuet-train-105#paper-1275#slide-2
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the topperforming model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
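The Story Cloze Test itself is a forced choice between two candidate endings, so ending-level classifier scores still have to be turned into a per-case decision. A small sketch of that evaluation step is below; the probability values are invented and the convention of listing the right-labeled ending first is an assumption for illustration.

```python
# Each test case: (P(right-labeled ending is 'right'), P(wrong-labeled ending is 'right')).
# The system answers correctly when it assigns the higher probability to the
# ending annotated as right.
cases = [
    (0.81, 0.34),
    (0.47, 0.52),
    (0.66, 0.61),
]
correct = sum(p_right > p_wrong for p_right, p_wrong in cases)
print(f"SCT accuracy: {correct / len(cases):.3f}")
```

A model is counted correct only when it ranks the annotated right ending above the wrong one, which is the accuracy reported for the SCT models compared in the tables above.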
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-2
Story Cloze Test Mostafazadeh et al 2016
The current benchmark for evaluating story understanding and narrative structure learning. Story Cloze Task: Given a context of four sentences, predict the ending of the story, i.e., select from the right and wrong ending choices. Context: Jim got his first credit card in college. He didn't have a job so he bought everything on his card. After he graduated he amounted a $10,000 debt. Jim realized that he was foolish to spend so much money. Right Ending: Jim decided to devise a plan for repayment. Wrong Ending: Jim decided to open another credit card. From now on we will refer to SCT as SCT-v1.0.
The current benchmark for evaluating story understanding and narrative structure learning. Story Cloze Task: Given a context of four sentences, predict the ending of the story, i.e., select from the right and wrong ending choices. Context: Jim got his first credit card in college. He didn't have a job so he bought everything on his card. After he graduated he amounted a $10,000 debt. Jim realized that he was foolish to spend so much money. Right Ending: Jim decided to devise a plan for repayment. Wrong Ending: Jim decided to open another credit card. From now on we will refer to SCT as SCT-v1.0.
[]
GEM-SciDuet-train-105#paper-1275#slide-3
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the topperforming model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-3
Results On SCT 1.0
Baseline Results LSDSem17 and Other Models Constant Choose First cogcomp Logistic Narrative-Chains-AP mflor Rule Based Narrative-Chains-Stories Pranav Goel Logistic cogcomp(UIUC) - Linear classification system that measures a story's coherence based on the sequence of events, emotional trajectory, and plot consistency (includes endings). msap(UW) - Linear classifier based on language modeling probabilities of the entire story, and linguistic features of only the ending sentences.
Baseline Results LSDSem17 and Other Models Constant Choose First cogcomp Logistic Narrative-Chains-AP mflor Rule Based Narrative-Chains-Stories Pranav Goel Logistic cogcomp(UIUC) - Linear classification system that measures a story's coherence based on the sequence of events, emotional trajectory, and plot consistency (includes endings). msap(UW) - Linear classifier based on language modeling probabilities of the entire story, and linguistic features of only the ending sentences.
[]
GEM-SciDuet-train-105#paper-1275#slide-4
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the topperforming model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-4
Story Ending Biases
Mostafazadeh et al. (2016) were very careful with the task design, the data collection process, and establishing various baselines sampled from ROC Stories created Wrong Ending stories through Amazon MTurk had an AMT to verify quality Despite that, Schwartz et al. found stylistic differences between right and wrong endings: number of words n-gram distribution character n-gram distribution Their classifier without feeding context achieves 72.4% accuracy on SCT-v1.0! **similar results confirmed by other models, (Cai et al., 2017)
Mostafazadeh et al. (2016) were very careful with the task design, the data collection process, and establishing various baselines sampled from ROC Stories created Wrong Ending stories through Amazon MTurk had an AMT to verify quality Despite that, Schwartz et al. found stylistic differences between right and wrong endings: number of words n-gram distribution character n-gram distribution Their classifier without feeding context achieves 72.4% accuracy on SCT-v1.0! **similar results confirmed by other models, (Cai et al., 2017)
[]
GEM-SciDuet-train-105#paper-1275#slide-5
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the topperforming model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-5
Biases in Various AI Datasets
From NLI, to VQA, and now Story Cloze Test, our narrow benchmarks inevitably have data creation artifacts and hence yield biased models.
From NLI, to VQA, and now Story Cloze Test, our narrow benchmarks inevitably have data creation artifacts and hence yield biased models.
[]
GEM-SciDuet-train-105#paper-1275#slide-6
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the topperforming model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-6
Our Main Contributions
The summary of this talk: 1. Analyzed SCT-v1.0 ending features 2. Developed a strong classifier on SCT-v1.0 using only ending features 3. Developed a new crowd-sourcing scheme to tackle the ending biases 4. Collected a new dataset, SCT-v1.5
The summary of this talk: 1. Analyzed SCT-v1.0 ending features 2. Developed a strong classifier on SCT-v1.0 using only ending features 3. Developed a new crowd-sourcing scheme to tackle the ending biases 4. Collected a new dataset, SCT-v1.5
[]
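The Model section in the paper content above describes the EndingReg classifier only at a high level: ending-only features (token count, VADER compound sentiment, character 4-grams, POS and token-POS n-grams) fed into an L2-regularized logistic regression tuned by grid search. The paper does not name its toolkit, so the following is a minimal, hypothetical sketch of such a pipeline using scikit-learn and NLTK; every library choice, parameter value, and variable name here is illustrative, not the authors' released implementation.

```python
# Hypothetical sketch of an EndingReg-style, ending-only classifier:
# token count + VADER compound sentiment + character 4-grams + word n-grams,
# combined and fed to an L2-regularized logistic regression with a grid
# search over the regularization strength. Not the authors' released code.
import numpy as np
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # nltk.download('vader_lexicon')
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.preprocessing import FunctionTransformer

vader = SentimentIntensityAnalyzer()

def dense_features(endings):
    # Two dense features per candidate ending: token count and VADER compound score.
    return np.array([[len(e.split()), vader.polarity_scores(e)["compound"]]
                     for e in endings])

features = FeatureUnion([
    ("chargrams", CountVectorizer(analyzer="char", ngram_range=(4, 4), min_df=5)),
    ("wordgrams", CountVectorizer(analyzer="word", ngram_range=(1, 3), min_df=5)),
    ("dense", FunctionTransformer(dense_features, validate=False)),
])

model = Pipeline([
    ("features", features),
    ("clf", LogisticRegression(penalty="l2", max_iter=1000)),
])

# Grid search over C, mirroring the hyper-parameter tuning the paper mentions.
search = GridSearchCV(model, {"clf__C": [0.01, 0.1, 1.0, 10.0]}, cv=5)

# endings: list of single ending sentences; labels: 1 for 'right', 0 for 'wrong'.
# search.fit(endings, labels)
# predictions = search.predict(new_endings)
```

A fuller reproduction would also add POS and token-POS n-gram vectorizers (e.g., built over nltk.pos_tag output) and the Yngve complexity score, which this sketch omits.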
GEM-SciDuet-train-105#paper-1275#slide-7
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the top-performing model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-7
Statistical Analysis of Endings
We did an extensive analysis comparing the Right Endings and Wrong Endings, including Part of Speech n-grams and Combined Token + POS n-grams. The analysis was done by performing a two-sample t-test on token count, sentiment, and complexity, and by taking count measurements for the n-grams, between Right and Wrong Endings.
We did an extensive analysis comparing the Right Endings and Wrong Endings, including Part of Speech n-grams and Combined Token + POS n-grams. The analysis was done by performing a two-sample t-test on token count, sentiment, and complexity, and by taking count measurements for the n-grams, between Right and Wrong Endings.
[]
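The slide above names the n-gram count measurements but not how they were computed. Below is a hedged sketch, using NLTK, of counting POS bigrams separately for right and wrong endings and ranking the ones over-represented in wrong endings; the two placeholder ending lists are illustrative stand-ins for the full validation set, and none of this is the authors' analysis code.

```python
# Illustrative count of POS bigrams in right vs. wrong endings (NLTK sketch).
# Requires the 'punkt' and 'averaged_perceptron_tagger' NLTK resources.
from collections import Counter
import nltk

def pos_ngrams(sentence, n=2):
    # POS-tag the sentence and return its sequence of POS n-grams.
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(sentence))]
    return list(nltk.ngrams(tags, n))

def ngram_counts(endings, n=2):
    counts = Counter()
    for ending in endings:
        counts.update(pos_ngrams(ending, n))
    return counts

# Placeholder lists; in practice these would hold every ending in the set.
right_endings = ["A passer-by helped her up and helped her collect the papers."]
wrong_endings = ["The teacher got up and walked home leaving the papers behind."]

right_counts = ngram_counts(right_endings)
wrong_counts = ngram_counts(wrong_endings)

# POS bigrams over-represented in wrong endings relative to right endings.
diff = {g: wrong_counts[g] - right_counts.get(g, 0) for g in wrong_counts}
print(sorted(diff.items(), key=lambda kv: kv[1], reverse=True)[:10])
```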
GEM-SciDuet-train-105#paper-1275#slide-8
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the top-performing model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-8
Analysis: Token Count
[Plot: token-count distributions for right endings vs. wrong endings.] Conclusion: Right Endings tend to be longer than Wrong Endings.
[Plot: token-count distributions for right endings vs. wrong endings.] Conclusion: Right Endings tend to be longer than Wrong Endings.
[]
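As an illustration of the token-count comparison summarized on the slide above, a Welch two-sample t-test over per-ending token counts could look like the sketch below. SciPy is assumed, and the placeholder lists (taken from the example endings quoted in the paper content) stand in for the full sets of right and wrong endings.

```python
# Illustrative Welch two-sample t-test on token counts of right vs. wrong
# endings (sketch only; placeholder lists stand in for the full dataset).
from scipy import stats

right_endings = ["She was offered the new job at a higher salary.",
                 "A passer-by helped her up and helped her collect the papers."]
wrong_endings = ["Ramona had no reason to want to change jobs anymore.",
                 "The teacher got up and walked home leaving the papers behind."]

right_lengths = [len(e.split()) for e in right_endings]
wrong_lengths = [len(e.split()) for e in wrong_endings]

t_stat, p_value = stats.ttest_ind(right_lengths, wrong_lengths, equal_var=False)
print(f"mean right = {sum(right_lengths) / len(right_lengths):.1f} tokens, "
      f"mean wrong = {sum(wrong_lengths) / len(wrong_lengths):.1f} tokens, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```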
GEM-SciDuet-train-105#paper-1275#slide-9
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the top-performing model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-9
Analysis Sentiment Analysis
Used the Stanford Sentiment Analyzer [0-4] and Vader Sentiment Tagger [-1,1]. [Table: sentiment of right endings vs. wrong endings, with p-values.] The VADER Sentiment score difference is significant: right endings tend to be more positive than wrong endings. The ending of most stories would probably yield a neutral to positive sentiment. [Figure: sentiment score distributions, annotated with a higher and more concentrated peak for Right Endings and a wider distribution.]
Used the Stanford Sentiment Analyzer [0-4] and Vader Sentiment Tagger [-1,1]. [Table: sentiment of right endings vs. wrong endings, with p-values.] The VADER Sentiment score difference is significant: right endings tend to be more positive than wrong endings. The ending of most stories would probably yield a neutral to positive sentiment. [Figure: sentiment score distributions, annotated with a higher and more concentrated peak for Right Endings and a wider distribution.]
[]
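The VADER side of the sentiment comparison on the slide above can be sketched in a few lines. This is an illustrative sketch only: it assumes the `vaderSentiment` package, and the two example endings are borrowed from the sample stories quoted in the paper content rather than from the released data.

```python
# Minimal sketch of scoring candidate endings with VADER, as described on the slide.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

endings = {
    "right": "She was offered the new job at a higher salary.",
    "wrong": "The teacher got up and walked home leaving the papers behind.",
}

for label, ending in endings.items():
    # polarity_scores returns neg/neu/pos plus a compound score in [-1, 1]
    scores = analyzer.polarity_scores(ending)
    print(f"{label} ending compound sentiment: {scores['compound']:+.3f}")
```

The compound score lies in [-1, 1], matching the range quoted on the slide; the Stanford Sentiment Analyzer [0-4] scores would come from a separate CoreNLP pipeline and are not shown here.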
GEM-SciDuet-train-105#paper-1275#slide-10
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the topperforming model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-10
Analysis Syntactic Complexity Measurement
[Table: Yngve complexity of right endings vs. wrong endings, with p-values.] Conclusion: the Yngve score was generally more stable, and Wrong Endings are more complex than Right Endings. [Image from Roark et al. 2014.]
[Table: Yngve complexity of right endings vs. wrong endings, with p-values.] Conclusion: the Yngve score was generally more stable, and Wrong Endings are more complex than Right Endings. [Image from Roark et al. 2014.]
[]
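For the Yngve complexity score mentioned on this slide, a rough sketch of one common formulation is given below: each branch contributes the number of siblings to its right, a word's score is the sum of those contributions along the path from the root, and the sentence score is the mean over words. The helper and the tiny hand-built tree are assumptions for illustration; the paper does not describe its implementation, and in practice the trees would come from a constituency parser.

```python
# Sketch of a per-word Yngve score over a toy constituency tree.
def yngve_word_scores(tree, depth=0):
    """Tree is either a word (str) or a (label, [children]) pair."""
    if isinstance(tree, str):                 # leaf = one word
        return [depth]
    _label, children = tree
    scores = []
    n = len(children)
    for i, child in enumerate(children):
        # this child has (n - 1 - i) siblings to its right
        scores.extend(yngve_word_scores(child, depth + (n - 1 - i)))
    return scores

# (S (NP the dog) (VP barked)) -- a hypothetical toy parse
toy_tree = ("S", [("NP", ["the", "dog"]), ("VP", ["barked"])])
word_scores = yngve_word_scores(toy_tree)
sentence_yngve = sum(word_scores) / len(word_scores)
print(word_scores, sentence_yngve)            # [2, 1, 0] 1.0
```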
GEM-SciDuet-train-105#paper-1275#slide-11
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the topperforming model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-11
Analysis N gram Counts
Features: 1-5 length stemmed token n-grams, with START token; 4-character-size n-grams; Part-of-Speech n-grams (POS tag and bucketed); combined Token + POS n-grams. Analysis: 'got' or 'learn' appear often in Right endings and 'decid' often in Wrong endings; Wrong endings frequently have tokens like 'nt' or 'snt'; Right Endings are more likely to feature pronouns (PRP) whereas Wrong Endings are likely to use proper nouns (NNP).
Features: 1-5 length stemmed token n-grams, with START token; 4-character-size n-grams; Part-of-Speech n-grams (POS tag and bucketed); combined Token + POS n-grams. Analysis: 'got' or 'learn' appear often in Right endings and 'decid' often in Wrong endings; Wrong endings frequently have tokens like 'nt' or 'snt'; Right Endings are more likely to feature pronouns (PRP) whereas Wrong Endings are likely to use proper nouns (NNP).
[]
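A minimal sketch of extracting the ending n-gram features listed on the slide above is shown below. The tooling (NLTK for tokenization, stemming, and POS tagging; scikit-learn's CountVectorizer for the n-gram counts) is an assumption, as are the two toy endings taken from the example stories in the paper content; min_df is set to 1 only because the toy corpus is tiny, whereas the paper keeps n-grams that appear at least five times.

```python
# Sketch: stemmed token 1-5 grams with a START token, char 4-grams, and POS n-grams.
import nltk
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

stemmer = PorterStemmer()

def stemmed_tokens_with_start(sentence):
    return ["<START>"] + [stemmer.stem(t) for t in nltk.word_tokenize(sentence)]

def pos_string(sentence):
    return " ".join(tag for _, tag in nltk.pos_tag(nltk.word_tokenize(sentence)))

endings = [
    "She was offered the new job at a higher salary.",
    "Ramona had no reason to want to change jobs anymore.",
]

token_ngrams = CountVectorizer(tokenizer=stemmed_tokens_with_start,
                               ngram_range=(1, 5), min_df=1)
char_4grams = CountVectorizer(analyzer="char", ngram_range=(4, 4), min_df=1)
pos_ngrams = CountVectorizer(token_pattern=r"\S+", lowercase=False,
                             ngram_range=(1, 5), min_df=1)

X_token = token_ngrams.fit_transform(endings)
X_char = char_4grams.fit_transform(endings)
X_pos = pos_ngrams.fit_transform([pos_string(e) for e in endings])
print(X_token.shape, X_char.shape, X_pos.shape)
```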
GEM-SciDuet-train-105#paper-1275#slide-12
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the topperforming model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-12
EndingReg Model
A Logistic Regression model to perform the Story Cloze Test using only the following features extracted from the endings (token count, VADER, Yngve, n-grams, POS, char-grams). *An L2 regularization penalty was also added, and a grid search was conducted for parameter tuning. [Table: accuracy using token count + VADER + Yngve, n-grams, POS, char-grams, and all features combined.]
A Logistic Regression model to perform the Story Cloze Test using only the following features extracted from the endings (token count, VADER, Yngve, n-grams, POS, char-grams). *An L2 regularization penalty was also added, and a grid search was conducted for parameter tuning. [Table: accuracy using token count + VADER + Yngve, n-grams, POS, char-grams, and all features combined.]
[]
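The EndingReg setup on the slide above can be sketched as follows: each candidate ending becomes one row labelled right (1) or wrong (0), and an L2-regularized logistic regression is tuned by grid search. This is a hedged sketch, not the authors' code: character 4-grams stand in for the full feature set, and the four toy endings and labels come from the example stories quoted in the paper content.

```python
# Sketch of an ending-only classifier with L2-penalized logistic regression + grid search.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline

endings = [
    "She was offered the new job at a higher salary.",                  # right
    "Ramona had no reason to want to change jobs anymore.",             # wrong
    "A passer-by helped her up and helped her collect the papers.",     # right
    "The teacher got up and walked home leaving the papers behind.",    # wrong
]
labels = np.array([1, 0, 1, 0])  # 1 = right ending, 0 = wrong ending

pipeline = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(4, 4)),   # stand-in for the full feature set
    LogisticRegression(penalty="l2", solver="liblinear"),
)
grid = GridSearchCV(pipeline,
                    {"logisticregression__C": [0.01, 0.1, 1.0, 10.0]},
                    cv=2)
grid.fit(endings, labels)

# At test time, score both candidate endings of a story and pick the one with
# the higher predicted probability of being the right ending.
p_right = grid.predict_proba(endings)[:, 1]
print(p_right)
```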
GEM-SciDuet-train-105#paper-1275#slide-13
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the topperforming model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
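The paper text above specifies the EndingReg feature set (number of tokens, VADER composite sentiment, Yngve complexity, token-POS n-grams, POS n-grams, and four-length character-grams, with every n-gram required to appear at least five times) fed to an L2-regularised logistic regression tuned by grid search. The sketch below is a minimal reconstruction of that ending-only classifier, not the authors' released code: it keeps the token-count, sentiment, word-n-gram and char-4-gram features, omits the Yngve and POS features, and the hyper-parameter grid and helper names are illustrative assumptions.

```python
# Minimal sketch of an ending-only classifier in the spirit of EndingReg.
# Assumes scikit-learn, scipy and the vaderSentiment package are installed;
# the Yngve-complexity and POS-based features from the paper are omitted.
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

_vader = SentimentIntensityAnalyzer()

def dense_features(endings):
    # Two hand-crafted features per ending: token count and VADER compound score.
    return csr_matrix([[len(e.split()),
                        _vader.polarity_scores(e)["compound"]] for e in endings])

def fit_ending_only_model(endings, labels):
    """endings: list of candidate ending sentences;
    labels: 1 for a 'right' ending, 0 for a 'wrong' ending."""
    word_ngrams = CountVectorizer(ngram_range=(1, 2), min_df=5)
    char_4grams = CountVectorizer(analyzer="char", ngram_range=(4, 4), min_df=5)
    X = hstack([word_ngrams.fit_transform(endings),
                char_4grams.fit_transform(endings),
                dense_features(endings)])
    # L2-penalised logistic regression with a small, illustrative grid over C.
    grid = GridSearchCV(LogisticRegression(penalty="l2", max_iter=1000),
                        param_grid={"C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
    grid.fit(X, labels)
    featurize = lambda sents: hstack([word_ngrams.transform(sents),
                                      char_4grams.transform(sents),
                                      dense_features(sents)])
    return grid.best_estimator_, featurize
```

At test time, each Story Cloze case would be scored by featurising its two candidate endings and choosing the one with the higher predicted probability of the 'right' class; a generic scoring loop of that kind is sketched after the "EndingReg Results" slide record further below.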
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-13
The Criteria of the New Dataset
The Right and Wrong Endings should: (1) contain a similar number of tokens; (2) have similar distributions of token n-grams and char-grams; (3) occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotional consistencies when applicable.
The Right and Wrong Endings should: (1) contain a similar number of tokens; (2) have similar distributions of token n-grams and char-grams; (3) occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotional consistencies when applicable.
[]
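The slide above lists the sourcing criteria for new ending pairs (similar token counts, similar n-gram and char-gram distributions, comparable standalone plausibility and tone). A lightweight per-pair screen in that spirit might look like the sketch below; the length and sentiment thresholds are illustrative choices, not the exact limits used when building SCT-v1.5.

```python
# Rough per-pair check mirroring the authoring constraints: comparable length,
# no trivial negation-only edit, and broadly similar sentiment.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

_vader = SentimentIntensityAnalyzer()

def pair_passes_screen(right_ending, wrong_ending,
                       max_token_diff=3, max_sentiment_gap=0.5):
    r_toks, w_toks = right_ending.split(), wrong_ending.split()
    if abs(len(r_toks) - len(w_toks)) > max_token_diff:
        return False
    # Reject a wrong ending that is just the right ending plus a negator.
    negators = {"not", "never", "didn't", "wasn't", "isn't", "don't"}
    if [t.lower() for t in w_toks if t.lower() not in negators] == \
       [t.lower() for t in r_toks]:
        return False
    gap = abs(_vader.polarity_scores(right_ending)["compound"]
              - _vader.polarity_scores(wrong_ending)["compound"])
    return gap <= max_sentiment_gap
```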
GEM-SciDuet-train-105#paper-1275#slide-14
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the topperforming model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-14
Collecting The New Dataset
After various rounds of pilot studies, we found the following paradigm to work the best. New Data Collection Steps: (1) collected 5,000 new five-sentence stories with MTurk; (2) a second AMT round to modify the last sentence to make a non-sensible story. Here, the prompt instructs the workers to make sure: a. the Wrong Ending makes sense standalone; b. the Right and Wrong endings do not differ in # of words by more than 3; c. the changes cannot be as simple as negating the verb; (3) a third AMT round to verify quality. This entire process resulted in creating the Story Cloze Test v1.5 dataset, with 1,571 stories for each of the validation and test sets. [Table: standard deviation of the word and character n-gram counts (columns: token + POS n-gram, char-gram, POS n-gram), as well as the part-of-speech (POS) counts, between the right and wrong endings.]
After various rounds of pilot studies, we found the following paradigm to work the best. New Data Collection Steps: (1) collected 5,000 new five-sentence stories with MTurk; (2) a second AMT round to modify the last sentence to make a non-sensible story. Here, the prompt instructs the workers to make sure: a. the Wrong Ending makes sense standalone; b. the Right and Wrong endings do not differ in # of words by more than 3; c. the changes cannot be as simple as negating the verb; (3) a third AMT round to verify quality. This entire process resulted in creating the Story Cloze Test v1.5 dataset, with 1,571 stories for each of the validation and test sets. [Table: standard deviation of the word and character n-gram counts (columns: token + POS n-gram, char-gram, POS n-gram), as well as the part-of-speech (POS) counts, between the right and wrong endings.]
[]
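The slide above cites the standard deviation of n-gram counts between right and wrong endings as the bias statistic; the paper's Table 5 reports it dropping from roughly 13.9 to 7.0 for token n-grams between SCT-v1.0 and SCT-v1.5. One plausible way to compute such a statistic is sketched below; the paper does not spell out the exact formula, so this should be read as an approximation rather than the authors' computation.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

def ngram_count_std(right_endings, wrong_endings, analyzer="word", ngram_range=(1, 2)):
    """Spread of per-n-gram frequency differences between the right-ending and
    wrong-ending pools; smaller values mean the two pools are harder to tell
    apart from surface n-grams alone."""
    vec = CountVectorizer(analyzer=analyzer, ngram_range=ngram_range, min_df=5)
    vec.fit(right_endings + wrong_endings)              # shared vocabulary
    right = np.asarray(vec.transform(right_endings).sum(axis=0)).ravel()
    wrong = np.asarray(vec.transform(wrong_endings).sum(axis=0)).ravel()
    return float(np.std(right - wrong))
```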
GEM-SciDuet-train-105#paper-1275#slide-15
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the topperforming model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-15
EndingReg Results
Classification accuracy for various models on the SCT-v1.0 and SCT-v1.5 datasets.
Classification accuracy for various models on the SCT-v1.0 and SCT-v1.5 datasets.
[]
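The "EndingReg Results" slide above compares classification accuracy across models and dataset versions; in the paper text, EndingReg drops from 71.5% on SCT-v1.0 to 64.4% on SCT-v1.5. A generic scoring loop for any ending-only model is sketched below; the (ending_1, ending_2, gold_index) case format is an assumption for illustration, not the released file format.

```python
import numpy as np

def sct_accuracy(featurize, model, cases):
    """featurize: callable mapping a list of ending strings to a feature matrix
    (e.g. the helper returned by fit_ending_only_model in the earlier sketch);
    model: fitted binary classifier exposing predict_proba;
    cases: iterable of (ending_1, ending_2, gold_index) with gold_index in {0, 1}."""
    correct, total = 0, 0
    for ending_1, ending_2, gold_index in cases:
        probs = model.predict_proba(featurize([ending_1, ending_2]))[:, 1]
        correct += int(np.argmax(probs) == gold_index)
        total += 1
    return correct / total
```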
GEM-SciDuet-train-105#paper-1275#slide-16
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analysis and analyzed a variety of top performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the topperforming model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-16
The SOTA Models
The model in Radford et al., "Improving Language Understanding by Generative Pre-Training," achieves an accuracy of 86.5 on SCT-v1.0! It is a pretrained language model built with a Transformer network, combined with a task-specific supervised learning approach to classify the endings. Initial results on SCT-v1.5 show an accuracy of 81.06% for this model, which suggests a deeper story understanding model that goes beyond leveraging the intricacies of the particular test sets.
The model in Radford et al., "Improving Language Understanding by Generative Pre-Training," achieves an accuracy of 86.5 on SCT-v1.0! It is a pretrained language model built with a Transformer network, combined with a task-specific supervised learning approach to classify the endings. Initial results on SCT-v1.5 show an accuracy of 81.06% for this model, which suggests a deeper story understanding model that goes beyond leveraging the intricacies of the particular test sets.
[]
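The paper content in the record above describes an ending-only baseline (EndingReg): surface features extracted from each candidate ending are fed to an L2-regularized logistic regression, with no access to the four-sentence context. The sketch below illustrates that setup under stated assumptions: it uses scikit-learn and NLTK (punkt and vader_lexicon resources), covers only a subset of the listed features (token count, VADER compound score, character 4-grams, word n-grams with a minimum frequency of five), and assumes the labeled endings come from the SCT-v1.0 validation and test splits. None of the helper names below come from the dataset itself; this is an illustrative reconstruction, not the paper's released code.

import numpy as np
import nltk
from scipy.sparse import hstack, csr_matrix
from nltk.sentiment.vader import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# nltk.download("punkt"); nltk.download("vader_lexicon")  # one-time resource setup

def dense_features(texts, sia):
    # Per-ending token count and VADER compound sentiment score.
    rows = [[len(nltk.word_tokenize(t)), sia.polarity_scores(t)["compound"]] for t in texts]
    return csr_matrix(np.array(rows))

def featurize(train_texts, test_texts):
    sia = SentimentIntensityAnalyzer()
    char4 = CountVectorizer(analyzer="char", ngram_range=(4, 4), min_df=5)
    word12 = CountVectorizer(ngram_range=(1, 2), min_df=5)
    x_train = hstack([char4.fit_transform(train_texts),
                      word12.fit_transform(train_texts),
                      dense_features(train_texts, sia)]).tocsr()
    x_test = hstack([char4.transform(test_texts),
                     word12.transform(test_texts),
                     dense_features(test_texts, sia)]).tocsr()
    return x_train, x_test

# train_texts / train_labels: endings from the SCT-v1.0 validation split, 1 = right, 0 = wrong
# x_train, x_test = featurize(train_texts, test_texts)
# clf = GridSearchCV(LogisticRegression(penalty="l2", max_iter=1000),
#                    {"C": [0.01, 0.1, 1.0, 10.0]}, cv=5).fit(x_train, train_labels)
# ending_scores = clf.predict_proba(x_test)[:, 1]

In the two-choice SCT format, the classifier's probabilities for the two candidate endings of a test case would then be compared and the higher-scoring ending chosen, which is how an ending-only model can be evaluated without ever reading the context.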
GEM-SciDuet-train-105#paper-1275#slide-17
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analyses and analyzed a variety of top-performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the top-performing model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-17
Conclusion
We presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0). Developed a strong classifier using only the story endings. Developed a new data collection scheme for tackling the stylistic ending features. Created a new SCT dataset, SCT-v1.5, which overcomes some of the biases. The success of our modified data collection method shows how extreme care must be given for sourcing new datasets. However, as shown in multiple AI tasks, no collected dataset is entirely without its inherent biases, and often the biases in datasets go undiscovered. Remember: there is still a wide gap between system and human performance on either SCT-v1.0 or SCT-v1.5 ;)
We presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0). Developed a strong classifier using only the story endings. Developed a new data collection scheme for tackling the stylistic ending features. Created a new SCT dataset, SCT-v1.5, which overcomes some of the biases. The success of our modified data collection method shows how extreme care must be given for sourcing new datasets. However, as shown in multiple AI tasks, no collected dataset is entirely without its inherent biases, and often the biases in datasets go undiscovered. Remember: there is still a wide gap between system and human performance on either SCT-v1.0 or SCT-v1.5 ;)
[]
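Table 5 in the record above reports how much the right- and wrong-ending n-gram counts diverge in SCT-v1.0 versus SCT-v1.5. The exact statistic is not fully specified in the extracted text, so the snippet below is only a plausible reconstruction: for every n-gram seen at least five times, it takes the difference between its right-ending and wrong-ending counts and reports the standard deviation of those differences. Function and variable names are illustrative, not from the dataset.

from collections import Counter
import numpy as np

def ngrams(tokens, n):
    # Contiguous word n-grams of a tokenized ending.
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bias_spread(right_endings, wrong_endings, n=1, min_count=5):
    right, wrong = Counter(), Counter()
    for ending in right_endings:
        right.update(ngrams(ending.split(), n))
    for ending in wrong_endings:
        wrong.update(ngrams(ending.split(), n))
    # Keep only n-grams frequent enough to matter, mirroring the >= 5 cutoff above.
    shared = [g for g in set(right) | set(wrong) if right[g] + wrong[g] >= min_count]
    diffs = [right[g] - wrong[g] for g in shared]
    return float(np.std(diffs))

# e.g. compare bias_spread(v10_right, v10_wrong) against bias_spread(v15_right, v15_wrong)

A drop in this spread from the v1.0 endings to the v1.5 endings would mirror the reduction reported in Table 5, i.e., fewer n-grams that give away the ending class on their own.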
GEM-SciDuet-train-105#paper-1275#slide-18
1275
Tackling the Story Ending Biases in The Story Cloze Test
The Story Cloze Test (SCT) is a recent framework for evaluating story comprehension and script learning. There have been a variety of models tackling the SCT so far. Although the original goal behind the SCT was to require systems to perform deep language understanding and commonsense reasoning for successful narrative understanding, some recent models could perform significantly better than the initial baselines by leveraging human-authorship biases discovered in the SCT dataset. In order to shed some light on this issue, we have performed various data analyses and analyzed a variety of top-performing models presented for this task. Given the statistics we have aggregated, we have designed a new crowdsourcing scheme that creates a new SCT dataset, which overcomes some of the biases. We benchmark a few models on the new dataset and show that the top-performing model on the original SCT dataset fails to keep up its performance. Our findings further signify the importance of benchmarking NLP systems on various evolving test sets.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112 ], "paper_content_text": [ "Introduction Story comprehension has been one of the longestrunning ambitions in artificial intelligence (Dijk, 1980; Charniak, 1972) .", "One of the challenges in expanding the field had been the lack of a solid evaluation framework and datasets on which comprehension models can be trained and tested.", "Mostafazadeh et al.", "(2016) introduced the Story Cloze Test (SCT) evaluation framework to address * This work was performed at University of Rochester.", "this issue.", "This test evaluates a story comprehension system where the system is given a foursentence short story as the 'context' and two alternative endings and to the story, labeled 'right ending' and 'wrong ending.'", "Then, the system's task is to choose the right ending.", "In order to support this task, Mostafazadeh et al.", "also provide the ROC Stories dataset, which is a collection of crowd-sourced complete five sentence stories through Amazon Mechanical Turk (MTurk).", "Each story follows a character through a fairly simple series of events to a conclusion.", "Several shallow and neural models, including the state-of-the-art script learning approaches, were presented as baselines (Mostafazadeh et al., 2016) for tackling the task, where they show that all their models perform only slightly better than a random baseline suggesting that richer models are required for tackling this task.", "A variety of new systems were proposed (Mihaylov and Frank, 2017; Schenk and Chiarcos, 2017; Schwartz et al., 2017b; Roemmele et al., 2017) as a part of the first shared task on SCT at LSDSem'17 workshop (Mostafazadeh et al., 2017) .", "Surprisingly, one of the models made a staggering improvement of 15% to the accuracy, partially due to using stylistic features isolated in the ending choices (Schwartz et al., 2017b) , discarding the narrative context.", "Clearly, this success does not seem to reflect the intent of the original task, where the systems should leverage narrative understanding as opposed to the statistical biases in the data.", "In this paper, we study the effect of such biases between the ending choices and present a new scheme to reduce such stylistic artifacts.", "The contribution of this paper is threefold: (1) we provide an extensive analysis of the SCT dataset to shed some light on the ending data characteristics (Section 3) (2) we develop a new strong classifier for tackling the SCT that uses a variety Context Right Ending Wrong Ending Ramona was very unhappy in her job.", "She asked for a raise, but was denied.", "The refusal prompted her to aggressively comb the want ads.", "She found an interesting new possibility and set up an interview.", "She was offered the new job at a higher salary.", "Ramona had no reason to want to change jobs anymore.", "The teacher was walking with a stack of papers.", "Outside started to rain.", "When the teacher tried to walk down a few steps, she ended up falling.", "The papers flew out of her hands and landed on the ground.", "A passer-by helped her up and helped her collect the papers.", 
"The teacher got up and walked home leaving the papers behind.", "(Chaturvedi et al., 2017) fails to keep up the performance on our new dataset.", "We discuss the implications of this experiment to the greater research community in terms of data collection and benchmarking practices in Section 6.", "All the code and datasets for this paper will be released to the public.", "We hope that the availability of the new evaluation set can further support the continued research on story understanding.", "Related Work This paper mainly extends the work on creating the Story Cloze Test set (Mostafazadeh et al., 2016) , hereinafter SCT-v1.0.", "The SCT-v1.0 dataset was created as follows: full five-sentence stories from the ROC Stories corpus were sampled, then, the initial four sentences were shown to a set of MTurk 2 crowd workers who were prompted to author 'right' and 'wrong' endings.", "Mostafazadeh et al.", "(Mostafazadeh et al., 2016) give special care to make sure there were no boundary cases for 'right' and 'wrong' endings by implementing extra rounds of data filtering.", "The resulting SCT-v1.0 dataset had a validation (hereinafter, SCT-v1.0 Val) and a test set (SCT-v1.0 test), each with 1,871 cases.", "Furthermore, we conducted an extensive ngram analysis, using word tokens, characters, partof-speech, and token-POS (similar to Schwartz et al.", "(Schwartz et al., 2017b) ) as features.", "We see char-grams such as \"sn't\" and \"not\" appear more commonly in the 'wrong endings', suggesting heavy negation.", "In 'right endings', pronouns are used more frequently versus proper nouns used in 'wrong endings'.", "Artifacts such as 'pizza' are common in 'wrong endings,' which could suggest that for a given topic, the authors may replace an object in a right ending with a wrong one and quickly think up a common item such as pizza to create a 'wrong' one.", "An extensive analysis of these features, including the n-gram analysis, can be found in the supplementary material.", "Model Following the analysis above, we developed a Story Cloze model, hereinafter EndingReg, that only uses the ending features while disregarding the story context for choosing the right ending.", "We expanded each Story Cloze Test case's ending options into a set of two single sentences.", "Then, for each sentence, we created the following features: 1.", "Number of tokens 2.", "VADER composite sentiment score 3.", "Yngve complexity score 4.", "Token-POS n-grams 5.", "POS n-grams 6.", "Four length character-grams All n-gram features needed to appear at least five times throughout the dataset.", "The features were collected for each five-sentence story and then fed into a logistic regression classifier.", "As an initial experiment, we trained this model using the SCT-v1.0 validation set and tested on the SCT-v1.0 test set.", "An L2 regularization penalty was used to enforce a Gaussian prior on the feature-space, where a grid search was conducted for hyper-parameter tuning.", "This model achieves an accuracy of 71.5% on the SCT-v1.0 dataset which is on par with the highest score achieved by any model using only the endings.", "Table 3 shows the accuracies ob-tained by models using only those particular features.", "We achieve minimal but sometimes important classification using token count, VADER, and Yngve in combination alone, better classification using POS or char-grams alone, and best classification using n-grams alone.", "By combining all of them we achieve the overall best results.", "Table 3 : Classification results on 
SCT-v1.0 using each of the feature sets designated in the columns.", "Data Collection Based on the findings above, a new test set for the SCT was deemed necessary.", "The premise of predicting an ending to a short story, as opposed to predicting say a middle sentence, enables a more systematic evaluation where human can agree on the cases 100%.", "Hence, our goal was to come up with a data collection scheme that overcomes the data collection biases, while keeping the original evaluation format.", "As the data analysis revealed, the token count, sentiment, and the complexity are not as important features for classification as the ending n-grams are.", "We set the following goals for sourcing the new 'right' and 'wrong' endings.", "They both should: 1.", "Contain a similar number of tokens 2.", "Have similar distributions of token n-grams and char-grams 3.", "Occur as standalone events with the same likelihood to occur, with topical, sentimental, or emotion consistencies when applicable.", "First, we crowdsourced 5,000 new five-sentence stories through Amazon Mechanical Turk.", "We prompted the users in the same manner described in Mostafazadeh et al.", "(2016) .", "In order to source new 'wrong' endings, we tried two different methods.", "In Method #1, we kept the original ending sourcing format of Mostafazadeh et al., but imposed some further restrictions.", "This was done by taking the first four sentences of the newly collected stories and asking an MTurker to write a 'right' and 'wrong' ending for each.", "The new restrictions were: 'Each sentence should stay within the same subject area of the story,' and 'The number of words in the Right and Wrong sentences should not differ by more than 2 words,' and 'When possible, the Right and Wrong sentences should try to keep a similar tone/sentiment as one another.'", "The motivation behind this technique was to reduce the statistical differences by asking the user to be mindful of considerations.", "In Method #2, we took the five sentences stories and prompted a second set of MTurk workers to modify the fifth sentence in order to make a resulting five-sentence story non-sensible.", "Here, the prompt instructs the workers to make sure the new 'wrong ending' sentence makes sense standalone, that it does not differ in the number of words from the original sentence by more than three words, and that the changes cannot be as simple as e.g., putting the word 'not' in front of a description or a verb.", "As a result, the workers had much less flexibility for changing the underlying linguistic structures which can help tackle the authorship style differences between the 'right' and 'wrong' endings.", "The results in Table 4 , which show classification accuracy when using EndingReg on the two new data sources, show that Method #2 is a slightly better data sourcing scheme in reducing the bias, since the EndingReg model's performance is slightly worse.", "The set was further filtered through human verification similar to Mostafazadeh et al.", "(2016) .", "The filtering was done by splitting each SCT-v1.0's two alternative endings into two independent five-sentence stories and asking three different MTurk users to categorize the story as either: one where the story made complete sense, one where the story made sense until the last sentence and one where the story does not make sense for another reason.", "Stories were only selected if all the three MTurk users verified that the story with the 'right ending' and the corresponding story with the 'wrong 
ending' were verified to be indeed right and wrong respectively.", "This ensured a higher quality of data and eliminating boundary cases.", "This entire process resulted in creating the Story Cloze Test v1.5 (SCT-v1.5) dataset, consisting of 1,571 stories for each validation and test sets.", "Method #1 Method #2 EndingReg 0.709 0.695 cogcomp 0.649 0.641 Table 4 : Comparison of initial data sourcing methods n βˆ’ gram char βˆ’ gram P OS SCT-v1.0 13.9 12.4 16.4 SCT-v1.5 7.0 6.3 7.5 Table 5 : Standard deviation of the word and character n-gram counts, as well as the part of speech (POS) counts, between the right and wrong endings.", "Results In order to test the decrease in n-gram bias, which was the most salient feature for the classification task using only the endings, we compare the variance between the n-gram counts from SCT-v1.0 to SCT-v1.5.", "The results are presented in Table 5 , which indicates the drop in the standard deviations in our new dataset.", "Table 6 shows the classification results of various models on SCT-v1.5.", "The drop in accuracy of the EndingReg model between the SCT-v1.0 and SCT-v1.5 shows a significant improvement on the statistical weight of the stylistic features generated by the model.", "Since the main features used are the token length and the various n-grams, this suggests that the new 'right endings' and 'wrong endings' have much more similar token n-gram, pos n-gram, postoken n-gram and char-gram overlap.", "Furthermore, the CogComp model's performance has significantly dropped on SCT-v1.5.", "Although this model seems to be using story comprehension features such as event sequencing, since the endings are included in the sequences, the biases within the endings have influenced the predictions and the weak performance of the model in SCT-v1.5 suggest that this model had picked up on the biases of SCT-v1.0 as opposed to really understanding the context.", "In particular, the posterior probabilities for each ending choice using their features are quite similar on the SCT-v1.5.", "These results place the classification accuracy of this top performing model on par with or worse than the models that did not use the ending features of the old SCT-v1.0 dataset (Mostafazadeh et al., 2017) , which suggest that the gap that once was held by models using the ending biases seems to be corrected for.", "Al- though we did not get to test all the other models published on SCT-v1.0 directly, we predict similar trends.", "It is important to point out that the 64.4% performance attained by our EndingReg model is still high for a model which completely discards the context.", "This indicates that although we could correct for some of the stylistic biases, there are some other hidden patterns in the new endings that would not have been accounted for without having the EndingReg baseline.", "This showcases the importance of maintaining benchmarks that evolve and improve over time, where systems should not be optimized for particular narrow test sets.", "We propose the community to report accuracies on both SCT-v1.0 and SCT-v1.5, both of which still have a huge gap between the best system and the human performance.", "Conclusion In this paper, we presented a comprehensive analysis of the stylistic features isolated in the endings of the original Story Cloze Test (SCT-v1.0).", "Using that analysis, along with a classifier we developed for testing new data collection schemes, we created a new SCT dataset, SCT-v1.5, which overcomes some of the biases.", "Based on the results 
presented in this paper, we believe that our SCT-v1.5 is a better benchmark for story comprehension.", "However, as shown in multiple AI tasks (Ettinger et al., 2017; Antol et al., 2015; Jabri et al., 2016; Poliak et al., 2018) , no collected dataset is entirely without its inherent biases and often the biases in datasets go undiscovered.", "We believe that evaluation benchmarks should evolve and improve over time and we are planning to incrementally update the Story Cloze Test benchmark.", "All the new versions, along with a leader-board showcasing the stateof-the-art results, will be tracked via CodaLab https://competitions.codalab.org/ competitions/15333.", "The success of our modified data collection method shows how extreme care must be given for sourcing new datasets.", "We suggest the next SCT challenges to be completely blind, where the participants cannot deliberately leverage any particular data biases.", "Along with this paper, we are releasing the datasets and the developed models to the community.", "All the announcements, new supplementary material, and datasets can be accessed through http://cs.", "rochester.edu/nlp/rocstories/.", "We hope that this work ignites further interest in the community for making progress on story understanding." ] }
{ "paper_header_number": [ "1", "2", "4", "5", "6", "7" ], "paper_header_content": [ "Introduction", "Related Work", "Model", "Data Collection", "Results", "Conclusion" ] }
GEM-SciDuet-train-105#paper-1275#slide-18
Next Steps
We believe that evaluation benchmarks should evolve and improve over time, and we are planning to incrementally update the Story Cloze Test benchmark. Stay tuned for updates on the dataset and SOTA models via http://cs.rochester.edu/nlp/rocstories/. We expect to release the final dataset, along with the performance of the most recent SCT-v1.0 SOTA models on the new dataset, shortly after.
We believe that evaluation benchmarks should evolve and improve over time, and we are planning to incrementally update the Story Cloze Test benchmark. Stay tuned for updates on the dataset and SOTA models via http://cs.rochester.edu/nlp/rocstories/. We expect to release the final dataset, along with the performance of the most recent SCT-v1.0 SOTA models on the new dataset, shortly after.
[]
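The Next Steps slide above calls for re-benchmarking models on the new test set. A minimal evaluation loop for the two-alternative SCT format is sketched below; it assumes a scorer callable that rates a (context, ending) pair, and the field names are placeholders rather than the dataset's actual column names.

def story_cloze_accuracy(cases, scorer):
    # scorer(context, ending) -> higher means "more likely to be the right ending"
    correct = 0
    for case in cases:
        context = " ".join(case["sentences"])      # the four context sentences
        right_score = scorer(context, case["right_ending"])
        wrong_score = scorer(context, case["wrong_ending"])
        correct += int(right_score > wrong_score)
    return correct / len(cases)

# Example with a trivial stand-in scorer (prefer shorter endings):
# accuracy_v10 = story_cloze_accuracy(sct_v10_test, lambda ctx, end: -len(end))
# accuracy_v15 = story_cloze_accuracy(sct_v15_test, lambda ctx, end: -len(end))

Running the same scorer through this loop on both SCT-v1.0 and SCT-v1.5 test cases yields the pair of accuracies the paper recommends reporting side by side.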
GEM-SciDuet-train-106#paper-1281#slide-0
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with more practical and elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
"The α parameter from the geometric distribution defines the travel decay rate.", "A lower α defines conceptually more mobile agents.", "More generally, E_n is a special case of E(G_t, C_t, A_t) = E_{t+1} where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn. 3, which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g × c matrix giving the environment of the average agent in each community.", "E_n(G_t, A) = G_t^T α (I − (1 − α)A)^{−1} (2) E(G_t, C, A) = E_n(G_t, A) C (C^T C)^{−1} (3) The output of E must be broadcast to g × n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n × n adjacency matrix admits a c × c equitable partition A^π (Eqn. 4) (Schaub et al., 2016), which permits an alternate environment function E_EP(G_t, C, A), shown in Eqn. 5, that is equivalent to the lossless E_n under that assumption.", "If n ≫ c, E_EP is much faster to calculate because it only inverts a small c × c matrix rather than a large n × n one. This makes it feasible to run much larger simulations than what has been done in the past.", "A^π = (C^T C)^{−1} C^T A C (4) E_EP = α G^T C (I − (1 − α)A^π)^{−1} (C^T C)^{−1} (5) Learning in the Network The environment function describes what inputs E_{t+1} are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G_t.", "The second component of the framework describes the learning algorithm A(E_{t+1}) = G_{t+1}, how individuals respond to their input environment.", "The resulting G_{t+1} describes which grammars those learners will eventually contribute to the subsequent generation's environment E_{t+2}.", "This back-and-forth between adults' grammars G and children's environment E is the two-step cycle of language change (Fig. 1).", "Figure 1: Language change as an alternation between G and E matrices: ... G_t → E_{t+1} → G_{t+1} ... G_{t+i} → E_{t+i+1} ...", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.), so A is rarely neutral.", "A neutral and a simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described in Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016), which tested the behavior of neutral change in networks of single-grammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for Ξ± creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to 
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger, describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp. 58-65).", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007)'s study of the merger's frontier on the border between Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings are non-merged, but the younger siblings are merged.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M+) and non-merged (M−) input entertain both a merged (g+) and a non-merged (g−) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953).", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, the learner parses s with g_1 with probability p and with g_2 with probability q = 1 − p. p is rewarded according to whether the choice of g successfully parses s (g → s) or fails to (g ↛ s), where γ is some small constant: p′ = p + γq if g → s, and p′ = (1 − γ)p if g ↛ s.", "Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g ↛ s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on γ.",
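As a quick illustration of this reward-penalty dynamic, here is a small Python sketch (our own, with made-up penalty rates and token counts); it shows p, the probability of parsing with g_1, drifting toward C_2 / (C_1 + C_2), the limit given just below.

```python
import random

def variational_learner(n_tokens, c1, c2, gamma=0.01, p=0.5):
    """Linear Reward-Penalty dynamics; c1 and c2 are the penalty rates of g1 and g2."""
    for _ in range(n_tokens):
        if random.random() < p:                          # parse the token with g1
            fails = random.random() < c1
            p = (1 - gamma) * p if fails else p + gamma * (1 - p)
        else:                                            # parse the token with g2
            fails = random.random() < c2
            p = p + gamma * (1 - p) if fails else (1 - gamma) * p
    return p

# With c1 = 0.05 and c2 = 0.15, p drifts toward c2 / (c1 + c2) = 0.75.
print(variational_learner(50000, c1=0.05, c2=0.15))
```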
"lim_{t→∞} p_t = C_2 / (C_1 + C_2) and lim_{t→∞} q_t = C_1 / (C_1 + C_2).", "To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C+ and non-merged grammar C− from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g+ grammar collapses would-be minimal pairs into homophones, so the penalty rate C+ comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001), g+ listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered, regardless of the rate of M+.", "If H is the sum token frequency of all minimal pairs and h_i^o, h_i^oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn. 6.", "In contrast, g− listeners are sensitive to the phonemic distinction, so they misinterpret M− input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn. 7).", "And given M+ input, they misinterpret whenever they hear the phoneme which g− does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1 − ε) plus ε times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn. 7).", "Since g− misinterpretation rates are a function of the rate of M+ (p) in the environment, there is a threshold of M+ speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C+ = (1/H) Σ_i min(h_i^o, h_i^oh) (6) C− = (1/H) Σ_i [ p_0((1 − ε_oh)h_i^o + ε_oh h_i^oh) + q_0(ε_oh h_i^o + ε_oh h_i^oh) ] (7) Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004) corpus and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M+ because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007)'s sociolinguistic study.", "It predicts that younger children may have g+ while their parents and even older siblings have g− if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquire g+ iff > 17% of their input is M+, and they acquire g− otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g− or g+ in a single iteration, since the proportion of g+ speakers in the population is equivalent to the proportion of M+ input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010) .", "There are two grammars, g + and g βˆ’ , and learners internalize one or the other according to the 17% threshold of M + in their input.", "One cluster represents the source of the merger and is initialized at 100% g + , while the rest begin 100% g βˆ’ .", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "4 This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985) 's notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "5 Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age co-horts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4 .", "The fine/colored lines indicate the rate of M + within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially wellconnected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections 
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to uncover the interplay between acquisition and social structure in the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind, which explicitly model both simultaneously, are well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, they are an invaluable complement to those more traditional methodologies." ] }
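For readers who want to see how the pieces fit together, here is a compact end-to-end sketch under our own assumptions (not the authors' code) that combines the environment function with a 17%-threshold learner in a two-cluster toy network; the cluster sizes, edge weights, α, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def environment(G, A, alpha):
    """E_n(G, A) = G^T * alpha * (I - (1 - alpha) A)^{-1}, a g x n matrix of input distributions."""
    n = A.shape[0]
    return G.T @ (alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A))

def threshold_learn(E, threshold=0.17):
    """Learners acquire g+ (column 0) iff their share of merged input exceeds the threshold."""
    merged = (E[0] > threshold).astype(float)
    return np.stack([merged, 1.0 - merged], axis=1)

n_per, n_clusters = 30, 2
n = n_per * n_clusters
W = rng.random((n, n)) * 0.2               # weak, random between-cluster ties
for c in range(n_clusters):                # strong within-cluster ties
    block = slice(c * n_per, (c + 1) * n_per)
    W[block, block] += 1.0
A = W / W.sum(axis=0, keepdims=True)       # column-stochastic adjacency matrix

G = np.zeros((n, 2))
G[:, 1] = 1.0                              # everyone starts with the non-merged grammar g-
G[:n_per] = [1.0, 0.0]                     # cluster 0 seeds the merger with g+

for t in range(15):
    E = environment(G, A, alpha=0.3)       # propagation step
    G = threshold_learn(E)                 # learning step
    print(t, round(G[:, 0].mean(), 3))     # population-wide share of g+
```

Swapping threshold_learn for a different acquisition model while holding the population model constant is the kind of modularity the discussion above emphasizes.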
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-0
Language Change
Languages change over time Both an internal and external process Individuals acquire language and transmit it to future generations New variants propagate through populations Must model how the individual reacts to linguistic input and to the community
Languages change over time Both an internal and external process Individuals acquire language and transmit it to future generations New variants propagate through populations Must model how the individual reacts to linguistic input and to the community
[]
GEM-SciDuet-train-106#paper-1281#slide-1
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with practically and more elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The Ξ± parameter from the geometric distribution 1 defines the travel decay rate.", "A lower Ξ± defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g Γ— c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t Ξ± (I βˆ’ (1 βˆ’ Ξ±)A) βˆ’1 (2) E(G t , C, A) = E n (G t , A)C(C C) βˆ’1 (3) The output of E must be broadcast to g Γ— n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n Γ— n adjacency matrix admits a c Γ— c equitable partition A Ο€ (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n if A.", "If n c, E EP is much faster to calculate because it only inverts a small c Γ— c matrix rather than a large n Γ— n. This makes it feasible to run much larger simulations than what has been done in the past.", "A Ο€ = (C C) βˆ’1 C AC (4) EEP = Ξ±G C (I βˆ’ (1 βˆ’ Ξ±)A Ο€ ) βˆ’1 (C C) βˆ’1 (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-andforth between adults' grammars G and childrens' environment E is the two-step cycle of language change (Fig.", "1) .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe .", ".", ".", "G t β†’ E t+1 β†’ G t+1 .", ".", ".", "G t+i β†’ E t+i+1 .", ".", ".", "Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral 
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for Ξ± creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to 
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp.", "58-65) .", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007) 's study of the merger's frontier on the border Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings non-merged, but the younger siblings are.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M + ) and non-merged (M βˆ’ ) input entertain both a merged (g + ) and non-merged (g βˆ’ ) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953) .", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, The learner parses s with g 1 with probability p and with g 2 with probability q = 1 βˆ’ p. p is rewarded according to whether the choice of g successfully parses s (g β†’ s) or it fails to (g s), where Ξ³ is some small constant.", "p = p + Ξ³q, g β†’ s (1 βˆ’ Ξ³)p, g s Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on Ξ³. 
lim tβ†’βˆž p t = C 2 C 1 + C 2 lim tβ†’βˆž q t = C 1 C 1 + C 2 To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C + and non-merged grammar C βˆ’ from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g + grammar collapses would-be minimal pairs into homophones, so the penalty rate C + comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001) , g + listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered regardless of the rate of M + .", "If H is the sum token frequency of all minimal pairs and h i o , h i oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn.", "6.", "In contrast, g βˆ’ listeners are sensitive to the phonemic distinction, so they misinterpret M βˆ’ input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn.", "7).", "And given M + input, they misinterpret whenever they hear the phoneme which g βˆ’ does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1-) plus times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn.", "7).", "Since g βˆ’ misinterpretation rates are a function of the rate of M + (p) in the environment, there is a threshold of M + speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C + = 1 H i min(h i o , h i oh ) (6) C βˆ’ = 1 H i p 0 ((1 βˆ’ oh )h i o + oh h i oh ) (7) +q 0 ( oh h i o + oh h i oh ) Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004 ) corpus 3 and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼ 17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M + because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007) 's sociolinguistic study.", "It predicts that younger children may have g + while their parents and even older siblings have g βˆ’ if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquires g + iff > 17% of their input is M + and they acquire g βˆ’ otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g βˆ’ or g + in a single iteration, since the proportion of g + speakers in the population is equivalent to the proportion of M + input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a 
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010) .", "There are two grammars, g + and g βˆ’ , and learners internalize one or the other according to the 17% threshold of M + in their input.", "One cluster represents the source of the merger and is initialized at 100% g + , while the rest begin 100% g βˆ’ .", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "4 This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985) 's notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "5 Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age co-horts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4 .", "The fine/colored lines indicate the rate of M + within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially wellconnected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections 
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another, and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to uncover the interplay between acquisition and social structure in the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind, which explicitly model both simultaneously, are well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, they are an invaluable complement to those more traditional methodologies." ] }
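To make the simulation described in the block above concrete, here is a minimal Python/NumPy sketch of the merger-spread procedure: it combines the environment function E_n(G_t, A) = G_t' α (I − (1 − α) A)^(-1) from Section 2.2 with the 17% threshold learner and short age-cohort updates. The network here is a much smaller toy (10 clusters of 20 nodes) with simplified random wiring; the function names, cluster sizes, α, and edge probabilities are our illustrative assumptions, not values from the paper, so the toy run is not tuned to reproduce the paper's Figures 4-6.

```python
import numpy as np

rng = np.random.default_rng(0)

def clustered_network(n_clusters=10, size=20, p_in=0.15, n_out=5):
    """Toy clustered network: 'strong' ties inside clusters, a few random 'weak'
    ties between them, returned as a column-stochastic adjacency matrix A."""
    n = n_clusters * size
    A = np.zeros((n, n))
    members = [list(range(c * size, (c + 1) * size)) for c in range(n_clusters)]
    for mem in members:
        for k, i in enumerate(mem):
            A[mem[(k + 1) % size], i] = A[i, mem[(k + 1) % size]] = 1.0  # ring backbone
            for j in mem:
                if i != j and rng.random() < p_in:
                    A[j, i] = 1.0                      # extra intra-cluster ties
    for c in range(n_clusters):                         # sparse inter-cluster "weak" ties
        for d in rng.choice(n_clusters, size=n_out, replace=False):
            if d != c:
                i, j = rng.choice(members[c]), rng.choice(members[int(d)])
                A[j, i] = A[i, j] = 1.0
    return A / A.sum(axis=0, keepdims=True), members

def environment(G, A, alpha=0.25):
    """Eqn. 2: E_n(G_t, A) = G_t' alpha (I - (1 - alpha) A)^(-1).
    Column j of the result is the mixture of grammars learner j is exposed to."""
    return G.T @ (alpha * np.linalg.inv(np.eye(A.shape[0]) - (1.0 - alpha) * A))

def cohort_step(G, A, threshold=0.17, frac=0.1):
    """One short age cohort: a random 10% of nodes re-learn, internalizing the
    merged grammar g+ iff more than 17% of their input is M+."""
    E = environment(G, A)
    learners = rng.random(G.shape[0]) < frac
    G = G.copy()
    G[learners, 0] = (E[0, learners] > threshold).astype(float)  # column 0 = g+
    G[learners, 1] = 1.0 - G[learners, 0]
    return G

A, members = clustered_network()
G = np.zeros((A.shape[0], 2))
G[:, 1] = 1.0                                  # everyone starts with g-
G[members[0], 0] = 1.0                         # one cluster seeds the merger
G[members[0], 1] = 0.0
for t in range(300):
    G = cohort_step(G, A)
print([round(G[m, 0].mean(), 2) for m in members])   # per-cluster rate of g+
```

With the paper's full 100-cluster network and its specific wiring of the frontier clusters, this same loop is what yields the cluster-by-cluster S-curves reported in the Results above.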
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-1
Example The Cot Caught Merger
The vowel of cot is pronounced the same as the vowel of caught. Minimal pairs distinguished by the contrast: cot~caught, Don~Dawn, collar~caller, knotty~naughty, odd~awed. Present in many dialects of North American English. Spreading into Rhode Island. Rapid! Families with non-merged parents and older siblings but merged younger siblings.
The vowel of cot is pronounced the same as the vowel of caught. Minimal pairs distinguished by the contrast: cot~caught, Don~Dawn, collar~caller, knotty~naughty, odd~awed. Present in many dialects of North American English. Spreading into Rhode Island. Rapid! Families with non-merged parents and older siblings but merged younger siblings.
[]
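The slide above turns on Yang (2009)'s roughly 17% threshold of merged input, so here is a rough sketch of how such a threshold can be computed from minimal-pair statistics. Eqn. 6 (the merged listener's penalty C+) is implemented directly; the non-merged listener's penalty C−(p), which Eqn. 7 makes a linear function of the rate p of merged input, is abstracted into its two endpoints. The token frequencies and endpoint values below are invented placeholders rather than the Wortschatz counts or Peterson & Barney (1952) confusion rates the paper uses, so the printed threshold will not come out at 17%.

```python
def merged_penalty(pairs):
    """Eqn. 6: a merged (g+) listener is misled only when the *less frequent*
    member of a now-homophonous minimal pair is uttered."""
    H = sum(a + b for a, b in pairs.values())
    return sum(min(a, b) for a, b in pairs.values()) / H

def merger_threshold(c_plus, c_minus_at_0, c_minus_at_1):
    """Rate p* of merged input at which C+ = C-(p*); above p*, the merged
    grammar has the lower penalty and wins out (cf. Eqn. 7)."""
    return (c_plus - c_minus_at_0) / (c_minus_at_1 - c_minus_at_0)

# invented token frequencies for (cot-class word, caught-class word) pairs
pairs = {"cot~caught": (120, 300), "Don~Dawn": (40, 25), "odd~awed": (200, 10),
         "collar~caller": (60, 45), "knotty~naughty": (8, 30)}
c_plus = merged_penalty(pairs)
# invented endpoints of the non-merged listener's penalty C-(p) at p = 0 and p = 1
print(c_plus, merger_threshold(c_plus, c_minus_at_0=0.02, c_minus_at_1=0.60))
```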
GEM-SciDuet-train-106#paper-1281#slide-3
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with more practical and elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
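As a small illustration of the data structures the abstract alludes to, the sketch below builds the column-stochastic adjacency matrix A and the community indicator matrix C described in Section 2, and shows the special case in which every entry of A is 1/n, where the framework reduces to a perfectly-mixed (Niyogi & Berwick style) population. The helper names and the tiny example network are our own assumptions.

```python
import numpy as np

def column_stochastic(weights):
    """Normalize a nonnegative weight matrix so each column sums to 1; entry
    a[i, j] is then the probability weight of the connection from j to i."""
    W = np.asarray(weights, dtype=float)
    return W / W.sum(axis=0, keepdims=True)

def community_indicator(labels, n_communities):
    """n x c indicator matrix C: C[i, k] = 1 iff individual i belongs to community k."""
    C = np.zeros((len(labels), n_communities))
    C[np.arange(len(labels)), labels] = 1.0
    return C

A = column_stochastic([[0, 1, 1, 0],
                       [1, 0, 1, 0],
                       [1, 1, 0, 1],
                       [0, 0, 1, 0]])
C = community_indicator([0, 0, 1, 1], 2)
A_mixed = np.full((4, 4), 1 / 4)            # a_ij = 1/n: perfectly-mixed special case
print(A.sum(axis=0), A_mixed.sum(axis=0))   # both column-stochastic: columns sum to 1
```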
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The Ξ± parameter from the geometric distribution 1 defines the travel decay rate.", "A lower Ξ± defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g Γ— c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t Ξ± (I βˆ’ (1 βˆ’ Ξ±)A) βˆ’1 (2) E(G t , C, A) = E n (G t , A)C(C C) βˆ’1 (3) The output of E must be broadcast to g Γ— n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n Γ— n adjacency matrix admits a c Γ— c equitable partition A Ο€ (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n if A.", "If n c, E EP is much faster to calculate because it only inverts a small c Γ— c matrix rather than a large n Γ— n. This makes it feasible to run much larger simulations than what has been done in the past.", "A Ο€ = (C C) βˆ’1 C AC (4) EEP = Ξ±G C (I βˆ’ (1 βˆ’ Ξ±)A Ο€ ) βˆ’1 (C C) βˆ’1 (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-andforth between adults' grammars G and childrens' environment E is the two-step cycle of language change (Fig.", "1) .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe .", ".", ".", "G t β†’ E t+1 β†’ G t+1 .", ".", ".", "G t+i β†’ E t+i+1 .", ".", ".", "Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral 
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for Ξ± creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to 
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp.", "58-65) .", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007) 's study of the merger's frontier on the border Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings non-merged, but the younger siblings are.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M + ) and non-merged (M βˆ’ ) input entertain both a merged (g + ) and non-merged (g βˆ’ ) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953) .", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, The learner parses s with g 1 with probability p and with g 2 with probability q = 1 βˆ’ p. p is rewarded according to whether the choice of g successfully parses s (g β†’ s) or it fails to (g s), where Ξ³ is some small constant.", "p = p + Ξ³q, g β†’ s (1 βˆ’ Ξ³)p, g s Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on Ξ³. 
lim tβ†’βˆž p t = C 2 C 1 + C 2 lim tβ†’βˆž q t = C 1 C 1 + C 2 To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C + and non-merged grammar C βˆ’ from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g + grammar collapses would-be minimal pairs into homophones, so the penalty rate C + comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001) , g + listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered regardless of the rate of M + .", "If H is the sum token frequency of all minimal pairs and h i o , h i oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn.", "6.", "In contrast, g βˆ’ listeners are sensitive to the phonemic distinction, so they misinterpret M βˆ’ input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn.", "7).", "And given M + input, they misinterpret whenever they hear the phoneme which g βˆ’ does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1-) plus times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn.", "7).", "Since g βˆ’ misinterpretation rates are a function of the rate of M + (p) in the environment, there is a threshold of M + speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C + = 1 H i min(h i o , h i oh ) (6) C βˆ’ = 1 H i p 0 ((1 βˆ’ oh )h i o + oh h i oh ) (7) +q 0 ( oh h i o + oh h i oh ) Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004 ) corpus 3 and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼ 17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M + because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007) 's sociolinguistic study.", "It predicts that younger children may have g + while their parents and even older siblings have g βˆ’ if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquires g + iff > 17% of their input is M + and they acquire g βˆ’ otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g βˆ’ or g + in a single iteration, since the proportion of g + speakers in the population is equivalent to the proportion of M + input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a 
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010) .", "There are two grammars, g + and g βˆ’ , and learners internalize one or the other according to the 17% threshold of M + in their input.", "One cluster represents the source of the merger and is initialized at 100% g + , while the rest begin 100% g βˆ’ .", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "4 This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985) 's notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "5 Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age co-horts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4 .", "The fine/colored lines indicate the rate of M + within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially wellconnected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections 
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to undercover the interplay between acquisition and social structure on the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind which explicitly model both simultaneously is well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, it is an invaluable complement to those more traditional methodologies." ] }
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-3
Three Classes of Framework
(1) Swarm / agent-based models (ABM): individual agents on a grid moving randomly and interacting. Bloomfield (1933)'s Principle of Density for free. Not a lot of control over the network. Thousands of degrees of freedom -> should run many, many times. (2) Network models: speakers are nodes in a graph, edges are possibility of interaction. Much more control over network structure. Easy to model concepts from the sociolinguistic lit. (e.g., Milroy & Milroy). Nodes only interact with immediate neighbours -> slow and less realistic? Practically implemented as random interactions between neighbours -> same problem as #1. (3) Algebraic models: expected outcome of interactions is calculated analytically. + Closed-form solution rather than simulation -> faster and more direct. - No network structure! Always implemented over perfectly mixed populations. This proliferation of boutique frameworks is a problem: an ad hoc framework risks overfitting the pattern, and comparison between frameworks is challenging.
(1) Swarm / agent-based models (ABM): individual agents on a grid moving randomly and interacting. Bloomfield (1933)'s Principle of Density for free. Not a lot of control over the network. Thousands of degrees of freedom -> should run many, many times. (2) Network models: speakers are nodes in a graph, edges are possibility of interaction. Much more control over network structure. Easy to model concepts from the sociolinguistic lit. (e.g., Milroy & Milroy). Nodes only interact with immediate neighbours -> slow and less realistic? Practically implemented as random interactions between neighbours -> same problem as #1. (3) Algebraic models: expected outcome of interactions is calculated analytically. + Closed-form solution rather than simulation -> faster and more direct. - No network structure! Always implemented over perfectly mixed populations. This proliferation of boutique frameworks is a problem: an ad hoc framework risks overfitting the pattern, and comparison between frameworks is challenging.
[]
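The slide above singles out the algebraic class's closed-form step as its main computational advantage; the equitable-partition shortcut of Eqns. 4-5 is what keeps that step cheap when communities are internally uniform, since it inverts a c x c quotient matrix instead of the full n x n one. The sketch below is our own illustration of those two equations (the function names and the toy two-community example are assumptions, not the authors' code).

```python
import numpy as np

def environment_full(G, A, alpha=0.5):
    """Eqn. 2 over individuals: inverts the full n x n matrix."""
    n = A.shape[0]
    return G.T @ (alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A))

def environment_ep(G, C, A, alpha=0.5):
    """Eqns. 4-5: with internally uniform communities, use the c x c quotient
    A_pi = (C'C)^(-1) C'AC and invert only that."""
    CtC_inv = np.linalg.inv(C.T @ C)
    A_pi = CtC_inv @ C.T @ A @ C
    c = A_pi.shape[0]
    return alpha * G.T @ C @ np.linalg.inv(np.eye(c) - (1 - alpha) * A_pi) @ CtC_inv

# two internally uniform communities of three individuals each
C = np.kron(np.eye(2), np.ones((3, 1)))   # 6 x 2 membership indicator
A = np.full((6, 6), 1 / 6)                # column-stochastic adjacency (perfectly mixed)
G = np.repeat(np.eye(2), 3, axis=0)       # rows: each individual's grammar distribution
print(environment_ep(G, C, A))            # g x c community-level environments
print(environment_full(G, A) @ C / 3)     # averaging individuals gives the same matrix
```

When n is in the tens of thousands and c is small, only the c x c inverse needs to be computed, which is what makes the larger simulations in Section 3 tractable.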
GEM-SciDuet-train-106#paper-1281#slide-5
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with more practical and elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
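The acquisition model this framework is paired with in Section 4 is Yang's variational learner, a Linear Reward-Punishment scheme: the learner keeps a weight p for one grammar, rewards the grammar it sampled when it parses the incoming token, and punishes it when it fails, so that p converges to C2 / (C1 + C2), where Ci is each grammar's penalty probability. Below is a hedged sketch; the interface (boolean parse outcomes per token) and the toy penalty rates are our own choices for illustration.

```python
import random

def variational_learner(tokens, gamma=0.01, p=0.5, seed=0):
    """Linear Reward-Punishment update: with prob. p parse with g1, else with g2;
    reward the chosen grammar if it parses the token, punish it if it fails."""
    rng = random.Random(seed)
    for parses_g1, parses_g2 in tokens:
        if rng.random() < p:                          # chose g1
            p = p + gamma * (1 - p) if parses_g1 else (1 - gamma) * p
        else:                                         # chose g2
            p = (1 - gamma) * p if parses_g2 else p + gamma * (1 - p)
    return p

# toy input stream: g1 fails on 5% of tokens, g2 on 20% (invented penalty rates)
rng = random.Random(1)
C1, C2 = 0.05, 0.20
stream = [(rng.random() > C1, rng.random() > C2) for _ in range(20000)]
print(variational_learner(stream))   # hovers near C2 / (C1 + C2) = 0.8
```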
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The Ξ± parameter from the geometric distribution 1 defines the travel decay rate.", "A lower Ξ± defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g Γ— c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t Ξ± (I βˆ’ (1 βˆ’ Ξ±)A) βˆ’1 (2) E(G t , C, A) = E n (G t , A)C(C C) βˆ’1 (3) The output of E must be broadcast to g Γ— n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n Γ— n adjacency matrix admits a c Γ— c equitable partition A Ο€ (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n if A.", "If n c, E EP is much faster to calculate because it only inverts a small c Γ— c matrix rather than a large n Γ— n. This makes it feasible to run much larger simulations than what has been done in the past.", "A Ο€ = (C C) βˆ’1 C AC (4) EEP = Ξ±G C (I βˆ’ (1 βˆ’ Ξ±)A Ο€ ) βˆ’1 (C C) βˆ’1 (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-andforth between adults' grammars G and childrens' environment E is the two-step cycle of language change (Fig.", "1) .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe .", ".", ".", "G t β†’ E t+1 β†’ G t+1 .", ".", ".", "G t+i β†’ E t+i+1 .", ".", ".", "Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral 
"Learning in the Network The environment function describes what inputs E_{t+1} are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G_t.", "The second component of the framework describes the learning algorithm A(E_{t+1}) = G_{t+1}, how individuals respond to their input environment.", "The resulting G_{t+1} describes which grammars those learners will eventually contribute to the subsequent generation's environment E_{t+2}.", "This back-and-forth between adults' grammars G and children's environment E is the two-step cycle of language change (Fig. 1: language change as an alternation between G and E matrices, ... G_t → E_{t+1} → G_{t+1} ... G_{t+i} → E_{t+i+1} ...).", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.), so A is rarely neutral.", "A neutral and a simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.",
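The two-step cycle of Fig. 1 can be written as a short driver loop. The sketch below is illustrative (it reuses environment_full from the snippet above and treats the learning algorithm as a pluggable function, with neutral learning as a placeholder default):

```python
def neutral_learner(E):
    # neutral change: learners internalize grammars at the rates present in their input
    return E.T                           # (g x n) environment -> (n x g) grammar matrix

def simulate(G0, A, alpha, learn=neutral_learner, n_iters=50):
    # alternate propagation (G -> E) and acquisition (E -> G), as in Fig. 1
    G, history = G0, [G0]
    for _ in range(n_iters):
        E = environment_full(G, A, alpha)
        G = learn(E)
        history.append(G)
    return history
```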
"Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016), which tested the behavior of neutral change in networks of single-grammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral change produced chaotic paths of change regardless of network shape and that periodically "rewiring" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and the choice of categorical learners conspires to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016), initialize all members of cluster 1 with grammar g_1 and all members of cluster 2 with grammar g_2, and add additional edges between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs, as in the original model.", "In a pair of infinitely large clusters, or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g_1 and g_2 after some number of iterations depending on the specifics of the network shape and the setting for α, creating the red curves in Fig. 2.", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce "well-behaved" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016).", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3, results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in a small population might conclude that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.",
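A hypothetical sketch of the maturation step and initialization used in this reproduction is given below (illustrative only; the centralized cluster topology and the added inter-cluster edges are elided, and n is the population size under test):

```python
import numpy as np

def categorical_learner(E):
    # each learner matures categorically to the single most frequent grammar in its input
    n, g = E.shape[1], E.shape[0]
    G = np.zeros((n, g))
    G[np.arange(n), E.argmax(axis=0)] = 1.0
    return G

n = 200                      # population size under test (200 vs. 20000)
G0 = np.zeros((n, 2))
G0[: n // 2, 0] = 1.0        # cluster 1 initialized with g1
G0[n // 2 :, 1] = 1.0        # cluster 2 initialized with g2
```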
"Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994).", "Yang (2009)'s acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the non-merged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007)'s detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input required for a child to acquire the merged grammar; however, when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger, describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada, among others, where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp. 58-65).", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007)'s study of the merger's frontier on the border of Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings are non-merged, but the younger siblings are merged.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M+) and non-merged (M−) input entertain both a merged (g+) and a non-merged (g−) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward-Punishment model (Bush and Mosteller, 1953).", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, the learner parses s with g_1 with probability p and with g_2 with probability q = 1 − p. p is rewarded according to whether the choice of g successfully parses s (g → s) or fails to (g ↛ s), where γ is some small constant: p' = p + γq if g → s, and p' = (1 − γ)p if g ↛ s.", "Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g ↛ s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on γ: lim_{t→∞} p_t = C_2 / (C_1 + C_2) and lim_{t→∞} q_t = C_1 / (C_1 + C_2).", "To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C+ and the non-merged grammar C− from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs, because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g+ grammar collapses would-be minimal pairs into homophones, so the penalty rate C+ comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001), g+ listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered, regardless of the rate of M+.", "If H is the sum token frequency of all minimal pairs and h^i_o, h^i_oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn. 6.", "In contrast, g− listeners are sensitive to the phonemic distinction, so they misinterpret M− input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn. 7).", "And given M+ input, they misinterpret whenever they hear the phoneme which g− does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1 − ε), plus ε times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn. 7).", "Since g− misinterpretation rates are a function of the rate of M+ (p) in the environment, there is a threshold of M+ speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C+ = (1/H) Σ_i min(h^i_o, h^i_oh)    (6)", "C− = (1/H) Σ_i [ p_0 ((1 − ε_oh) h^i_o + ε_oh h^i_oh) + q_0 (ε_oh h^i_o + ε_oh h^i_oh) ]    (7)", "Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project corpus (Biemann et al., 2004) and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ~17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M+ because it is well under the 50% threshold expected for neutral (non-advantaged) change, and it is very close to what was found in Johnson (2007)'s sociolinguistic study.", "It predicts that younger children may have g+ while their parents and even older siblings have g− if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.",
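The learning rule and the threshold calculation can be sketched in a few lines of Python. The code below is purely illustrative (not the authors' implementation); the minimal-pair frequencies and the mishearing rate are hypothetical placeholders rather than the Wortschatz or Peterson and Barney values:

```python
def lrp_update(p, parsed, gamma=0.01):
    # Linear Reward-Punishment: reward the chosen grammar's weight on a successful
    # parse (g -> s) and punish it on a failure
    return p + gamma * (1 - p) if parsed else (1 - gamma) * p

def penalties(pairs, eps, p_merged):
    # pairs: (h_o, h_oh) token frequencies per minimal pair; eps: vowel mishearing rate;
    # p_merged: proportion of M+ speech in the environment (q = 1 - p_merged)
    H = sum(h_o + h_oh for h_o, h_oh in pairs)
    c_plus = sum(min(h_o, h_oh) for h_o, h_oh in pairs) / H
    q = 1 - p_merged
    c_minus = sum(p_merged * ((1 - eps) * h_o + eps * h_oh)
                  + q * (eps * h_o + eps * h_oh)
                  for h_o, h_oh in pairs) / H
    return c_plus, c_minus

# with placeholder frequencies, scan for the rate of merged input at which C- overtakes C+
pairs, eps = [(1200, 300), (150, 900)], 0.05
threshold = next(p / 100 for p in range(101)
                 if penalties(pairs, eps, p / 100)[1] >= penalties(pairs, eps, p / 100)[0])
```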
"Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquire g+ iff > 17% of their input is M+, and they acquire g− otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g− or g+ in a single iteration, since the proportion of g+ speakers in the population is equivalent to the proportion of M+ input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half of Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010).", "There are two grammars, g+ and g−, and learners internalize one or the other according to the 17% threshold of M+ in their input.", "One cluster represents the source of the merger and is initialized at 100% g+, while the rest begin 100% g−.", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters, representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "This network structure echoes work in sociolinguistics, in particular Milroy and Milroy (1985)'s notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for the propagation of a change.", "Propagation of the merged grammar is calculated by E_n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age cohorts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration, because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.",
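A minimal sketch of this setup, under the same assumptions as the earlier snippets (the cluster topology and the random inter-cluster wiring are elided; environment_full is the E_n function from above, and the grammar columns are ordered [g+, g−]):

```python
import numpy as np

def threshold_learner(E, threshold=0.17):
    # acquire g+ iff more than `threshold` of the input environment is merged;
    # row 0 of E is assumed to hold the rate of M+ heard by each learner
    merged = E[0] > threshold
    return np.stack([merged, ~merged], axis=1).astype(float)

def cohort_step(G, A, alpha, update_fraction=0.10):
    # short age cohorts: only a randomly chosen ~10% of nodes re-learn each iteration
    E = environment_full(G, A, alpha)
    G_next = threshold_learner(E)
    cohort = np.random.rand(G.shape[0]) < update_fraction
    G = G.copy()
    G[cohort] = G_next[cohort]
    return G
```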
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
"Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change, which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes, from the highly abstract (e.g., Kauhanen (2016)) to those grounded in sociolinguistic and acquisition research (e.g., Yang (2009)).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only be valid for small (on the order of 10^2) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitively motivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models, because every study implements its own learning, network, and interaction models.", "The modular nature of our framework advances against this trend, since it is now possible to hold the population model constant while slotting in various learning models to test them against one another, and vice versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to uncover the interplay between acquisition and social structure in the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind, which explicitly model both simultaneously, are well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, they are an invaluable complement to those more traditional methodologies." ] }
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-5
Best of All Worlds
Impose density effects on a network structure and calculate the outcome of each iteration analytically + Captures the Principle of Density + Models key facts about social networks + No random process in the core algorithm
Impose density effects on a network structure and calculate the outcome of each iteration analytically + Captures the Principle of Density + Models key facts about social networks + No random process in the core algorithm
[]
GEM-SciDuet-train-106#paper-1281#slide-6
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with more practical and elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The Ξ± parameter from the geometric distribution 1 defines the travel decay rate.", "A lower Ξ± defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g Γ— c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t Ξ± (I βˆ’ (1 βˆ’ Ξ±)A) βˆ’1 (2) E(G t , C, A) = E n (G t , A)C(C C) βˆ’1 (3) The output of E must be broadcast to g Γ— n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n Γ— n adjacency matrix admits a c Γ— c equitable partition A Ο€ (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n if A.", "If n c, E EP is much faster to calculate because it only inverts a small c Γ— c matrix rather than a large n Γ— n. This makes it feasible to run much larger simulations than what has been done in the past.", "A Ο€ = (C C) βˆ’1 C AC (4) EEP = Ξ±G C (I βˆ’ (1 βˆ’ Ξ±)A Ο€ ) βˆ’1 (C C) βˆ’1 (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-andforth between adults' grammars G and childrens' environment E is the two-step cycle of language change (Fig.", "1) .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe .", ".", ".", "G t β†’ E t+1 β†’ G t+1 .", ".", ".", "G t+i β†’ E t+i+1 .", ".", ".", "Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral 
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for Ξ± creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to 
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp.", "58-65) .", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007) 's study of the merger's frontier on the border Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings non-merged, but the younger siblings are.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M + ) and non-merged (M βˆ’ ) input entertain both a merged (g + ) and non-merged (g βˆ’ ) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953) .", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, The learner parses s with g 1 with probability p and with g 2 with probability q = 1 βˆ’ p. p is rewarded according to whether the choice of g successfully parses s (g β†’ s) or it fails to (g s), where Ξ³ is some small constant.", "p = p + Ξ³q, g β†’ s (1 βˆ’ Ξ³)p, g s Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on Ξ³. 
lim tβ†’βˆž p t = C 2 C 1 + C 2 lim tβ†’βˆž q t = C 1 C 1 + C 2 To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C + and non-merged grammar C βˆ’ from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g + grammar collapses would-be minimal pairs into homophones, so the penalty rate C + comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001) , g + listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered regardless of the rate of M + .", "If H is the sum token frequency of all minimal pairs and h i o , h i oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn.", "6.", "In contrast, g βˆ’ listeners are sensitive to the phonemic distinction, so they misinterpret M βˆ’ input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn.", "7).", "And given M + input, they misinterpret whenever they hear the phoneme which g βˆ’ does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1-) plus times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn.", "7).", "Since g βˆ’ misinterpretation rates are a function of the rate of M + (p) in the environment, there is a threshold of M + speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C + = 1 H i min(h i o , h i oh ) (6) C βˆ’ = 1 H i p 0 ((1 βˆ’ oh )h i o + oh h i oh ) (7) +q 0 ( oh h i o + oh h i oh ) Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004 ) corpus 3 and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼ 17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M + because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007) 's sociolinguistic study.", "It predicts that younger children may have g + while their parents and even older siblings have g βˆ’ if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquires g + iff > 17% of their input is M + and they acquire g βˆ’ otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g βˆ’ or g + in a single iteration, since the proportion of g + speakers in the population is equivalent to the proportion of M + input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a 
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010) .", "There are two grammars, g + and g βˆ’ , and learners internalize one or the other according to the 17% threshold of M + in their input.", "One cluster represents the source of the merger and is initialized at 100% g + , while the rest begin 100% g βˆ’ .", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "4 This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985) 's notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "5 Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age co-horts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4 .", "The fine/colored lines indicate the rate of M + within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially wellconnected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections 
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to undercover the interplay between acquisition and social structure on the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind which explicitly model both simultaneously is well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, it is an invaluable complement to those more traditional methodologies." ] }
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-6
The Model
Language change as a two-step loop Propagation: Variants distribute through the network Acquisition: Individuals internalize them Propagation: L distributes through the network Acquisition: Individuals react to L to create G If this were a linear chain,
Language change as a two-step loop Propagation: Variants distribute through the network Acquisition: Individuals internalize them Propagation: L distributes through the network Acquisition: Individuals react to L to create G If this were a linear chain,
[]
GEM-SciDuet-train-106#paper-1281#slide-7
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with more practical and elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n × n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1/n.", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n × c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A^k.", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = Σ_k p(ij | k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n × g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G'.", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E.
The α parameter from the geometric distribution 1 defines the travel decay rate.", "A lower α defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g × c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t ' α (I − (1 − α)A)^{-1} (2) E(G t , C, A) = E n (G t , A) C (C'C)^{-1} (3) The output of E must be broadcast to g × n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n × n adjacency matrix admits a c × c equitable partition A^π (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n when A admits such an equitable partition.", "If n ≫ c, E EP is much faster to calculate because it only inverts a small c × c matrix rather than a large n × n. This makes it feasible to run much larger simulations than what has been done in the past.", "A^π = (C'C)^{-1} C'AC (4) E EP = α G' C (I − (1 − α)A^π)^{-1} (C'C)^{-1} (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-and-forth between adults' grammars G and children's environment E is the two-step cycle of language change (Fig.", "1) .", "Figure 1: Language change as an alternation between G and E matrices: . . . G t → E t+1 → G t+1 . . . G t+i → E t+i+1 . . .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of single-grammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspires to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for α, creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small populations might conclude that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar; however, when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger, describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp.", "58-65) .", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007) 's study of the merger's frontier on the border of Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings are non-merged, but the younger siblings are merged.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M + ) and non-merged (M − ) input entertain both a merged (g + ) and non-merged (g − ) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953) .", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, the learner parses s with g 1 with probability p and with g 2 with probability q = 1 − p. p is rewarded according to whether the choice of g successfully parses s (g → s) or it fails to (g ↛ s), where γ is some small constant.", "p = p + γq if g → s, and p = (1 − γ)p if g ↛ s. Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g ↛ s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on γ.
lim t→∞ p t = C 2 / (C 1 + C 2 ), lim t→∞ q t = C 1 / (C 1 + C 2 ). To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C + and non-merged grammar C − from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g + grammar collapses would-be minimal pairs into homophones, so the penalty rate C + comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001) , g + listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered regardless of the rate of M + .", "If H is the sum token frequency of all minimal pairs and h i o , h i oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn.", "6.", "In contrast, g − listeners are sensitive to the phonemic distinction, so they misinterpret M − input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn.", "7).", "And given M + input, they misinterpret whenever they hear the phoneme which g − does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1 − ε) plus ε times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn.", "7).", "Since g − misinterpretation rates are a function of the rate of M + (p) in the environment, there is a threshold of M + speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C + = (1/H) Σ i min(h i o , h i oh ) (6) C − = (1/H) Σ i [ p 0 ((1 − ε oh )h i o + ε oh h i oh ) + q 0 (ε oh h i o + ε oh h i oh ) ] (7) Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004 ) corpus 3 and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ~17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M + because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007) 's sociolinguistic study.", "It predicts that younger children may have g + while their parents and even older siblings have g − if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquire g + iff > 17% of their input is M + and they acquire g − otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g − or g + in a single iteration, since the proportion of g + speakers in the population is equivalent to the proportion of M + input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010) .", "There are two grammars, g + and g βˆ’ , and learners internalize one or the other according to the 17% threshold of M + in their input.", "One cluster represents the source of the merger and is initialized at 100% g + , while the rest begin 100% g βˆ’ .", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "4 This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985) 's notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "5 Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age co-horts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4 .", "The fine/colored lines indicate the rate of M + within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially wellconnected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections 
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to uncover the interplay between acquisition and social structure on the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind, which explicitly model both simultaneously, are well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, they are an invaluable complement to those more traditional methodologies." ] }
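Eqn. (1) in the content above is derived from the picture of agents who stop and interact with probability α at the current node and otherwise keep traveling along the network. The following Monte Carlo sketch checks that picture against the closed form α(I − (1 − α)A)^{-1} used in Eqn. (2); the 3-node matrix, the trial count, and the function name are illustrative choices, not the authors' code.

```python
# Monte Carlo check of the traveling-agent reading of Eqn. (1): stop with
# probability alpha at the current node, otherwise step along column-stochastic A.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.3
A = np.array([[0.0, 0.5, 0.2],
              [0.6, 0.0, 0.8],
              [0.4, 0.5, 0.0]])      # toy network; every column sums to 1

def empirical_partners(j, trials=20000):
    counts = np.zeros(A.shape[0])
    for _ in range(trials):
        node = j
        while rng.random() >= alpha:                     # keep traveling
            node = rng.choice(A.shape[0], p=A[:, node])  # step to a neighbor of `node`
        counts[node] += 1                                # stop here and interact
    return counts / trials

closed_form = alpha * np.linalg.inv(np.eye(3) - (1 - alpha) * A)
print(empirical_partners(0))   # empirical interaction probabilities for agent 0
print(closed_form[:, 0])       # matching column of the closed form; should be close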
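The environment functions in Eqns. (2)–(5) are direct matrix expressions, so they can be written down almost verbatim. The sketch below (numpy, toy community sizes, transposes reconstructed from the dimension statements in the text) computes the lossless E_n, the community-averaged E, and the equitable-partition shortcut E_EP, and checks that the last two agree on a block-constant network where the partition is equitable by construction; it is an illustration, not the authors' implementation.

```python
# Sketch of Eqns. (2)-(5): lossless E_n, community-averaged E, and the E_EP shortcut.
import numpy as np

def E_n(G, A, alpha):
    """Eqn. (2): E_n(G, A) = G' alpha (I - (1 - alpha) A)^-1, a g x n matrix."""
    n = A.shape[0]
    return G.T @ (alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A))

def E_comm(G, C, A, alpha):
    """Eqn. (3): average the per-node environments within each community (g x c)."""
    return E_n(G, A, alpha) @ C @ np.linalg.inv(C.T @ C)

def E_EP(G, C, A, alpha):
    """Eqns. (4)-(5): the same quantity via the c x c equitable partition A^pi."""
    A_pi = np.linalg.inv(C.T @ C) @ C.T @ A @ C
    c = A_pi.shape[0]
    return alpha * G.T @ C @ np.linalg.inv(np.eye(c) - (1 - alpha) * A_pi) @ np.linalg.inv(C.T @ C)

# Two communities (sizes 3 and 2) with block-constant weights: every column of A
# sums to 1 and the partition is equitable, so E_comm and E_EP must coincide.
B = np.array([[0.8, 0.4],
              [0.2, 0.6]])                  # column-stochastic community mixing
blocks = [(0, 3), (3, 2)]
n = 5
C = np.zeros((n, 2)); C[:3, 0] = 1.0; C[3:, 1] = 1.0
A = np.zeros((n, n))
for r, (r0, nr) in enumerate(blocks):
    for s, (s0, ns) in enumerate(blocks):
        A[r0:r0 + nr, s0:s0 + ns] = B[r, s] / nr

G = np.array([[1, 0]] * 3 + [[0, 1]] * 2, dtype=float)   # cluster 1 speaks g1, cluster 2 g2
assert np.allclose(E_comm(G, C, A, 0.3), E_EP(G, C, A, 0.3))
```

The practical point of Eqn. (5) is visible here: E_EP only ever inverts c × c matrices, so it scales to far larger populations than the n × n inverse inside E_n.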
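The variational (Linear Reward-Punishment) learner and its limit behaviour can likewise be sketched directly from the update rule and the penalty probabilities quoted in the content above. The penalty values below are hypothetical and the loop is a plain restatement of the reward/punish rule rather than the paper's implementation; with a small γ and enough tokens, p should settle near the predicted limit C2 / (C1 + C2).

```python
# Sketch of the variational learner: pick a grammar, reward it if it parses the
# token, punish it otherwise.  Assumed penalty rates; not the paper's code.
import random

def variational_learner(tokens, c1, c2, gamma=0.01, p=0.5, seed=0):
    """c1, c2: penalty probabilities (chance that g1 / g2 fails on a random token)."""
    rng = random.Random(seed)
    for _ in range(tokens):
        use_g1 = rng.random() < p
        penalty = c1 if use_g1 else c2
        parsed = rng.random() >= penalty
        if use_g1:
            p = p + gamma * (1 - p) if parsed else (1 - gamma) * p
        else:
            q = 1 - p
            q = q + gamma * p if parsed else (1 - gamma) * q
            p = 1 - q
    return p

c1, c2 = 0.05, 0.02                            # hypothetical penalties; g2 is fitter
print(variational_learner(200_000, c1, c2))    # hovers near the predicted limit
print(c2 / (c1 + c2))                          # lim p_t = C2 / (C1 + C2) ~= 0.286
```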
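Finally, the threshold logic behind the roughly 17% figure can be sketched in a few lines. Eqn. (6) is reproduced directly; for the non-merged penalty we only use the fact, visible in Eqn. (7), that C− is affine in the merged share p0 of the input. The token counts and the intercept/slope stand-ins below are invented for illustration; the paper derives the real values from the Wortschatz corpus and Peterson and Barney's confusion rates.

```python
# Sketch of the merger-advantage threshold: C+ is fixed by minimal-pair frequencies
# (Eqn. 6), C-(p0) grows with the merged share p0, and g+ wins once C+ < C-(p0).
def c_plus(pair_freqs):
    """Eqn. (6): a merged listener misparses only the rarer member of each minimal pair."""
    H = sum(f1 + f2 for f1, f2 in pair_freqs)
    return sum(min(f1, f2) for f1, f2 in pair_freqs) / H

def merger_threshold(cp, c_minus_at_0, c_minus_slope):
    """Solve C-(p0) = C+ with the affine stand-in C-(p0) = c_minus_at_0 + c_minus_slope * p0."""
    return (cp - c_minus_at_0) / c_minus_slope

pairs = [(120, 30), (55, 5), (10, 90)]       # invented token counts for 3 minimal pairs
cp = c_plus(pairs)                           # = 45 / 310, about 0.145
print(cp, merger_threshold(cp, 0.02, 0.8))   # toy threshold; the paper's corpus gives ~0.17
```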
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-7
Vocabulary
L: That which is transmitted G: That which generates/describes/distinguishes L That which is learned/influenced by L Grammar Variety Latent Variable
L: That which is transmitted G: That which generates/describes/distinguishes L That which is learned/influenced by L Grammar Variety Latent Variable
[]
GEM-SciDuet-train-106#paper-1281#slide-8
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with practically and more elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The Ξ± parameter from the geometric distribution 1 defines the travel decay rate.", "A lower Ξ± defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g Γ— c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t Ξ± (I βˆ’ (1 βˆ’ Ξ±)A) βˆ’1 (2) E(G t , C, A) = E n (G t , A)C(C C) βˆ’1 (3) The output of E must be broadcast to g Γ— n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n Γ— n adjacency matrix admits a c Γ— c equitable partition A Ο€ (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n if A.", "If n c, E EP is much faster to calculate because it only inverts a small c Γ— c matrix rather than a large n Γ— n. This makes it feasible to run much larger simulations than what has been done in the past.", "A Ο€ = (C C) βˆ’1 C AC (4) EEP = Ξ±G C (I βˆ’ (1 βˆ’ Ξ±)A Ο€ ) βˆ’1 (C C) βˆ’1 (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-andforth between adults' grammars G and childrens' environment E is the two-step cycle of language change (Fig.", "1) .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe .", ".", ".", "G t β†’ E t+1 β†’ G t+1 .", ".", ".", "G t+i β†’ E t+i+1 .", ".", ".", "Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral 
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for Ξ± creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to 
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger, describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp. 58-65).", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007)'s study of the merger's frontier on the border of Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings are non-merged, but the younger siblings are merged.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M+) and non-merged (M−) input entertain both a merged (g+) and non-merged (g−) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953).", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, the learner parses s with g_1 with probability p and with g_2 with probability q = 1 − p. p is rewarded according to whether the choice of g successfully parses s (g → s) or fails to (g ↛ s), where γ is some small constant.", "p′ = p + γq, if g → s; p′ = (1 − γ)p, if g ↛ s. Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g ↛ s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on γ.
lim_{t→∞} p_t = C_2 / (C_1 + C_2), lim_{t→∞} q_t = C_1 / (C_1 + C_2). To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C+ and non-merged grammar C− from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g+ grammar collapses would-be minimal pairs into homophones, so the penalty rate C+ comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001), g+ listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered, regardless of the rate of M+.", "If H is the sum token frequency of all minimal pairs and h_i^o, h_i^oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn. 6.", "In contrast, g− listeners are sensitive to the phonemic distinction, so they misinterpret M− input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn. 7).", "And given M+ input, they misinterpret whenever they hear the phoneme which g− does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1 − ε), plus ε times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn. 7).", "Since g− misinterpretation rates are a function of the rate of M+ (p) in the environment, there is a threshold of M+ speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C+ = (1/H) Σ_i min(h_i^o, h_i^oh)   (6)", "C− = (1/H) Σ_i [ p_0 ((1 − ε_oh) h_i^o + ε_oh h_i^oh) + q_0 (ε_oh h_i^o + ε_oh h_i^oh) ]   (7)", "Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004) corpus³ and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M+ because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007)'s sociolinguistic study.", "It predicts that younger children may have g+ while their parents and even older siblings have g− if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquire g+ iff > 17% of their input is M+ and they acquire g− otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g− or g+ in a single iteration, since the proportion of g+ speakers in the population is equivalent to the proportion of M+ input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010).", "There are two grammars, g+ and g−, and learners internalize one or the other according to the 17% threshold of M+ in their input.", "One cluster represents the source of the merger and is initialized at 100% g+, while the rest begin 100% g−.", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters, representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.⁴", "This network structure echoes work in sociolinguistics, in particular Milroy and Milroy (1985)'s notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E_n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.⁵", "Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age cohorts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration, because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4.", "The fine/colored lines indicate the rate of M+ within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged clusters.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Within each cluster, there is a period in which few members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially well-connected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration, akin to Kauhanen (2016)'s rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of M+ across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change, which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016)) to those grounded in sociolinguistic and acquisition research (e.g., Yang (2009)).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only be valid for small (on the order of 10²) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitively-motivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to uncover the interplay between acquisition and social structure on the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind, which explicitly model both simultaneously, are well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, they are an invaluable complement to those more traditional methodologies." ] }
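A minimal NumPy sketch of the environment functions in Eqns. 2-5 above, written for illustration only (the function and variable names are ours, not the paper's, and no claim is made that this matches the authors' implementation):

import numpy as np

def environment_full(G, A, alpha):
    # E_n(G_t, A) = G_t^T * alpha * (I - (1 - alpha) A)^(-1)  (Eqn. 2)
    # G: n x g row-stochastic grammar matrix; A: n x n column-stochastic adjacency matrix.
    n = A.shape[0]
    interaction = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A)  # pairwise interaction probabilities
    return G.T @ interaction                                          # g x n: each individual's environment

def environment_communities(G, C, A, alpha):
    # E(G_t, C, A) = E_n(G_t, A) C (C^T C)^(-1)  (Eqn. 3): average environment per community.
    return environment_full(G, A, alpha) @ C @ np.linalg.inv(C.T @ C)  # g x c

def environment_equitable(G, C, A, alpha):
    # E_EP (Eqn. 5) via the equitable partition A_pi (Eqn. 4); only a c x c matrix is inverted.
    CtC_inv = np.linalg.inv(C.T @ C)
    A_pi = CtC_inv @ C.T @ A @ C
    c = A_pi.shape[0]
    return alpha * (G.T @ C) @ np.linalg.inv(np.eye(c) - (1 - alpha) * A_pi) @ CtC_inv

With C set to the n x n identity, environment_communities reduces to environment_full, matching the special case c = n described in the text.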
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-8
Binary G Examples
G: {Merged grammar, Non-merged grammar} L: Merged or non-merged instances of cot and caught words {Dived-generating grammar, Dove-generating grammar} Instances of the past tense of dive as dived or dove {have+NEG = haven't got grammar, have+NEG = don't have grammar} Instances of haven't got and instances of don't have
G: {Merged grammar, Non-merged grammar} L: Merged or non-merged instances of cot and caught words {Dived-generating grammar, Dove-generating grammar} Instances of the past tense of dive as dived or dove {have+NEG = haven't got grammar, have+NEG = don't have grammar} Instances of haven't got and instances of don't have
[]
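The ~17% threshold discussed in the paper content of this row follows from comparing the two penalty rates in Eqns. 6-7. The toy calculation below illustrates the logic only: the minimal-pair frequencies are invented, the mishearing model is collapsed to a single rate eps, and the function names are ours, so the number it prints is not the paper's 17% figure.

def c_plus(pairs):
    # Eqn. 6: merged listeners misinterpret only the less frequent member of each minimal pair.
    H = sum(f_o + f_oh for f_o, f_oh in pairs)
    return sum(min(f_o, f_oh) for f_o, f_oh in pairs) / H

def c_minus(pairs, p_merged, eps):
    # Simplified version of Eqn. 7 with a single mishearing rate eps: non-merged listeners
    # misinterpret merged input when they perceive the unexpected vowel, and misinterpret
    # non-merged input at the base mishearing rate.
    H = sum(f_o + f_oh for f_o, f_oh in pairs)
    q = 1.0 - p_merged
    total = sum(p_merged * ((1 - eps) * f_o + eps * f_oh) + q * eps * (f_o + f_oh)
                for f_o, f_oh in pairs)
    return total / H

def merger_threshold(pairs, eps, step=0.001):
    # Smallest proportion of merged input at which the merged grammar wins (C+ < C-).
    p = 0.0
    while p <= 1.0:
        if c_plus(pairs) < c_minus(pairs, p, eps):
            return round(p, 3)
        p += step
    return None

toy_pairs = [(120, 30), (15, 45), (60, 10)]   # hypothetical (cot-word, caught-word) token counts
print(merger_threshold(toy_pairs, eps=0.05))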
GEM-SciDuet-train-106#paper-1281#slide-9
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with practically and more elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The Ξ± parameter from the geometric distribution 1 defines the travel decay rate.", "A lower Ξ± defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g Γ— c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t Ξ± (I βˆ’ (1 βˆ’ Ξ±)A) βˆ’1 (2) E(G t , C, A) = E n (G t , A)C(C C) βˆ’1 (3) The output of E must be broadcast to g Γ— n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n Γ— n adjacency matrix admits a c Γ— c equitable partition A Ο€ (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n if A.", "If n c, E EP is much faster to calculate because it only inverts a small c Γ— c matrix rather than a large n Γ— n. This makes it feasible to run much larger simulations than what has been done in the past.", "A Ο€ = (C C) βˆ’1 C AC (4) EEP = Ξ±G C (I βˆ’ (1 βˆ’ Ξ±)A Ο€ ) βˆ’1 (C C) βˆ’1 (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-andforth between adults' grammars G and childrens' environment E is the two-step cycle of language change (Fig.", "1) .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe .", ".", ".", "G t β†’ E t+1 β†’ G t+1 .", ".", ".", "G t+i β†’ E t+i+1 .", ".", ".", "Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral 
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for Ξ± creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to 
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp.", "58-65) .", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007) 's study of the merger's frontier on the border Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings non-merged, but the younger siblings are.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M + ) and non-merged (M βˆ’ ) input entertain both a merged (g + ) and non-merged (g βˆ’ ) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953) .", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, The learner parses s with g 1 with probability p and with g 2 with probability q = 1 βˆ’ p. p is rewarded according to whether the choice of g successfully parses s (g β†’ s) or it fails to (g s), where Ξ³ is some small constant.", "p = p + Ξ³q, g β†’ s (1 βˆ’ Ξ³)p, g s Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on Ξ³. 
lim tβ†’βˆž p t = C 2 C 1 + C 2 lim tβ†’βˆž q t = C 1 C 1 + C 2 To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C + and non-merged grammar C βˆ’ from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g + grammar collapses would-be minimal pairs into homophones, so the penalty rate C + comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001) , g + listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered regardless of the rate of M + .", "If H is the sum token frequency of all minimal pairs and h i o , h i oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn.", "6.", "In contrast, g βˆ’ listeners are sensitive to the phonemic distinction, so they misinterpret M βˆ’ input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn.", "7).", "And given M + input, they misinterpret whenever they hear the phoneme which g βˆ’ does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1-) plus times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn.", "7).", "Since g βˆ’ misinterpretation rates are a function of the rate of M + (p) in the environment, there is a threshold of M + speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C + = 1 H i min(h i o , h i oh ) (6) C βˆ’ = 1 H i p 0 ((1 βˆ’ oh )h i o + oh h i oh ) (7) +q 0 ( oh h i o + oh h i oh ) Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004 ) corpus 3 and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼ 17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M + because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007) 's sociolinguistic study.", "It predicts that younger children may have g + while their parents and even older siblings have g βˆ’ if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquires g + iff > 17% of their input is M + and they acquire g βˆ’ otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g βˆ’ or g + in a single iteration, since the proportion of g + speakers in the population is equivalent to the proportion of M + input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a 
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010) .", "There are two grammars, g + and g βˆ’ , and learners internalize one or the other according to the 17% threshold of M + in their input.", "One cluster represents the source of the merger and is initialized at 100% g + , while the rest begin 100% g βˆ’ .", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "4 This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985) 's notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "5 Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age co-horts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4 .", "The fine/colored lines indicate the rate of M + within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially wellconnected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections 
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to undercover the interplay between acquisition and social structure on the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind which explicitly model both simultaneously is well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, it is an invaluable complement to those more traditional methodologies." ] }
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-9
Intuition behind Propagation Algorithm
For the individual at each node: Randomly select outgoing edge by weight and follow it OR stop; Increase chance of stopping next time; Interact with the individual at the current node; Nodes are not individuals. Individuals stand on nodes. Individuals travel along edges and find someone to interact with. Individuals connected by shorter or higher weighted paths are more likely to interact. Rather than simulating interactions in a loop, calculate a closed-form solution
For the individual at each node: Randomly select outgoing edge by weight and follow it OR stop; Increase chance of stopping next time; Interact with the individual at the current node; Nodes are not individuals. Individuals stand on nodes. Individuals travel along edges and find someone to interact with. Individuals connected by shorter or higher weighted paths are more likely to interact. Rather than simulating interactions in a loop, calculate a closed-form solution
[]
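The slide above describes the propagation algorithm as agents that repeatedly either stop or follow an outgoing edge chosen by weight, with a closed-form solution replacing the simulated walk. The sketch below checks that intuition on a toy three-node network, assuming the constant stopping probability alpha implied by the geometric distribution in Eqn. 2; the names and example matrix are ours:

import numpy as np

rng = np.random.default_rng(1)

def interaction_closed_form(A, alpha):
    # alpha * (I - (1 - alpha) A)^(-1): probability that a walk starting at column j stops at row i.
    n = A.shape[0]
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A)

def interaction_monte_carlo(A, alpha, walks=5000):
    # Simulate the walk directly: stop with probability alpha, otherwise follow an edge by weight.
    n = A.shape[0]
    counts = np.zeros((n, n))
    for start in range(n):
        for _ in range(walks):
            node = start
            while rng.random() >= alpha:
                node = rng.choice(n, p=A[:, node])   # columns of A hold outgoing edge weights
            counts[node, start] += 1
    return counts / walks

A = np.array([[0.0, 0.5, 0.2],
              [0.5, 0.0, 0.8],
              [0.5, 0.5, 0.0]])   # toy column-stochastic adjacency matrix
print(np.round(interaction_closed_form(A, 0.4), 3))
print(np.round(interaction_monte_carlo(A, 0.4), 3))

The two matrices should agree up to sampling noise, which is the sense in which the closed form replaces the inner interaction loop.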
GEM-SciDuet-train-106#paper-1281#slide-10
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with practically and more elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996), even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000).", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.), privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather than driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discuss their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013).", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933)'s \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on a series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The α parameter from the geometric distribution defines the travel decay rate.", "A lower α defines conceptually more mobile agents.", "More generally, E_n is a special case of E(G_t, C_t, A_t) = E_{t+1} where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn. 3, which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g × c matrix giving the environment of the average agent in each community.", "E_n(G_t, A) = α G_t^T (I − (1 − α)A)^{-1} (2)", "E(G_t, C, A) = E_n(G_t, A) C (C^T C)^{-1} (3)", "The output of E must be broadcast to g × n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n × n adjacency matrix admits a c × c equitable partition A^π (Eqn. 4) (Schaub et al., 2016), which permits an alternate environment function E_EP(G_t, C, A), shown in Eqn. 5, that is equivalent to the lossless E_n when A admits such an equitable partition.", "If n ≫ c, E_EP is much faster to calculate because it only inverts a small c × c matrix rather than a large n × n one. This makes it feasible to run much larger simulations than what has been done in the past.", "A^π = (C^T C)^{-1} C^T A C (4)", "E_EP = α G^T C (I − (1 − α)A^π)^{-1} (C^T C)^{-1} (5)",
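As an editorial aid, here is a minimal numpy sketch of the environment functions above (Eqns. 1-5). It is not code from the paper; the function and variable names are ours, and it assumes A is column stochastic, G is row stochastic, and 0 < α ≤ 1.

import numpy as np

def environment_full(G, A, alpha):
    # E_n(G_t, A), Eqn. 2: the g x n environment seen by every individual.
    # Summing alpha * (1 - alpha)**k * A**k over all path lengths k (Eqn. 1)
    # gives the closed form alpha * (I - (1 - alpha) * A)^-1.
    n = A.shape[0]
    P = alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * A)
    return G.T @ P

def environment_communities(G, C, A, alpha):
    # E(G_t, C, A), Eqn. 3: the g x c average environment per community.
    return environment_full(G, A, alpha) @ C @ np.linalg.inv(C.T @ C)

def environment_equitable(G, C, A, alpha):
    # E_EP, Eqns. 4-5: equivalent shortcut when A admits an equitable
    # partition over the communities in C; only a c x c matrix is inverted.
    c = C.shape[1]
    A_pi = np.linalg.inv(C.T @ C) @ C.T @ A @ C
    M = np.linalg.inv(np.eye(c) - (1.0 - alpha) * A_pi)
    return alpha * (G.T @ C) @ M @ np.linalg.inv(C.T @ C)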
"Learning in the Network The environment function describes what inputs E_{t+1} are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G_t.", "The second component of the framework describes the learning algorithm A(E_{t+1}) = G_{t+1}, how individuals respond to their input environment.", "The resulting G_{t+1} describes which grammars those learners will eventually contribute to the subsequent generation's environment E_{t+2}.", "This back-and-forth between adults' grammars G and children's environment E is the two-step cycle of language change (Fig. 1).", "(Figure 1: Language change as an alternation between G and E matrices: ... G_t → E_{t+1} → G_{t+1} ... G_{t+i} → E_{t+i+1} ...)", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.), so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016), which tested the behavior of neutral change in networks of single-grammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and the choice of categorical learners conspires to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016), initialize all members of cluster 1 with grammar g_1 and all members of cluster 2 with grammar g_2, and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs, as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g_1 and g_2 after some number of iterations depending on the specifics of the network shape and the setting for α, creating the red curves in Fig. 2.", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016).", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3, results are chaotic for n = 200 once again and near the predicted curve for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in a small population might conclude that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.",
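A sketch of the categorical learning step used in this reproduction may be useful. This is an editorial illustration only; it reuses environment_full from the sketch above, and the rounding of E to a 0/1 indicator matrix is the step whose interaction with small n is at issue here.

import numpy as np

def categorical_neutral_step(G, A, alpha):
    # One iteration of neutral change with categorical learners: each learner
    # adopts only the single most frequent grammar in its input environment.
    E = environment_full(G, A, alpha)            # g x n input distributions
    winners = E.argmax(axis=0)                   # majority grammar per learner
    G_next = np.zeros_like(G)                    # n x g indicator matrix
    G_next[np.arange(G.shape[0]), winners] = 1.0
    return G_next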
"Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994).", "Yang (2009)'s acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the non-merged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007)'s detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar; however, when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger, describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp. 58-65).", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007)'s study of the merger's frontier on the border between Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings are non-merged, but the younger siblings are merged.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M+) and non-merged (M−) input entertain both a merged (g+) and non-merged (g−) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953).", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, the learner parses s with g_1 with probability p and with g_2 with probability q = 1 − p. p is rewarded according to whether the choice of g successfully parses s (g → s) or fails to (g ↛ s), where γ is some small constant.", "p ← p + γq if g → s; p ← (1 − γ)p if g ↛ s.", "Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g ↛ s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on γ.",
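For illustration, this reward-penalty update can be written as follows. This is an editorial sketch; the two-boolean interface for "does grammar g parse s" is our invention, not part of the original model's description.

import random

def lrp_update(p, g1_parses, g2_parses, gamma=0.01):
    # One step of the two-grammar variational learner: pick a grammar with the
    # current probabilities, reward it if it parses the token, punish it if not.
    q = 1.0 - p
    if random.random() < p:                          # learner tries g1
        return p + gamma * q if g1_parses else (1.0 - gamma) * p
    else:                                            # learner tries g2
        # rewarding g2 lowers p toward 0; punishing g2 raises p
        return (1.0 - gamma) * p if g2_parses else p + gamma * q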
"lim_{t→∞} p_t = C_2 / (C_1 + C_2); lim_{t→∞} q_t = C_1 / (C_1 + C_2)", "To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C+ and the non-merged grammar C− from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g+ grammar collapses would-be minimal pairs into homophones, so the penalty rate C+ comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001), g+ listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered, regardless of the rate of M+.", "If H is the sum token frequency of all minimal pairs and h_o^i, h_oh^i are the frequencies of the ith pair's members, then C+ is calculated by Eqn. 6.", "In contrast, g− listeners are sensitive to the phonemic distinction, so they misinterpret M− input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn. 7).", "And given M+ input, they misinterpret whenever they hear the phoneme which g− does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1 − Ρ), plus Ρ times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn. 7).", "Since g− misinterpretation rates are a function of the rate of M+ (p) in the environment, there is a threshold of M+ speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C+ = (1/H) Σ_i min(h_o^i, h_oh^i) (6)", "C− = (1/H) Σ_i [ p_0((1 − Ρ_oh)h_o^i + Ρ_oh h_oh^i) + q_0(Ρ_oh h_o^i + Ρ_oh h_oh^i) ] (7)", "Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project corpus (Biemann et al., 2004) and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ~17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M+ because it is well under the 50% threshold expected for neutral (non-advantaged) change, and it is very close to what was found in Johnson (2007)'s sociolinguistic study.", "It predicts that younger children may have g+ while their parents and even older siblings have g− if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.",
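The penalty probabilities and the resulting threshold can be computed along these lines. This is an editorial sketch: the single mishearing rate eps and the toy pair counts are placeholder assumptions, not the Wortschatz frequencies or the Peterson and Barney rates used to obtain the ~17% figure.

def penalty_merged(pairs):
    # C+ (Eqn. 6): a merged listener misretrieves only the less frequent
    # member of each would-be minimal pair, normalized by total frequency H.
    H = sum(h_o + h_oh for h_o, h_oh in pairs)
    return sum(min(h_o, h_oh) for h_o, h_oh in pairs) / H

def penalty_nonmerged(pairs, p_merged, eps=0.05):
    # C- (Eqn. 7, simplified to a single mishearing rate eps): the non-merged
    # listener's misinterpretation rate given a proportion p_merged of merged input.
    q = 1.0 - p_merged
    H = sum(h_o + h_oh for h_o, h_oh in pairs)
    total = sum(p_merged * ((1.0 - eps) * h_o + eps * h_oh)
                + q * (eps * h_o + eps * h_oh)
                for h_o, h_oh in pairs)
    return total / H

def merger_threshold(pairs, eps=0.05, step=0.001):
    # Smallest proportion of merged input at which g+ has the advantage (C+ < C-).
    p = 0.0
    while p <= 1.0:
        if penalty_merged(pairs) < penalty_nonmerged(pairs, p, eps):
            return p
        p += step
    return None

# Hypothetical (cot, caught) token counts for a few minimal pairs; because these
# are placeholders, the printed threshold will not match the paper's ~17%.
toy_pairs = [(120, 40), (15, 60), (9, 9)]
print(merger_threshold(toy_pairs))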
"Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquire g+ iff > 17% of their input is M+, and they acquire g− otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g− or g+ in a single iteration, since the proportion of g+ speakers in the population is equivalent to the proportion of M+ input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010).", "There are two grammars, g+ and g−, and learners internalize one or the other according to the 17% threshold of M+ in their input.", "One cluster represents the source of the merger and is initialized at 100% g+, while the rest begin 100% g−.", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985)'s notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E_n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age cohorts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration, because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.",
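A possible construction of this kind of clustered, column-stochastic adjacency matrix is sketched below. This is editorial code; the hub-and-spoke cluster shape, the unit edge weights, the random inter-cluster wiring, and the seeding are our simplifications of the setup described above, not the paper's exact procedure.

import numpy as np

def clustered_adjacency(n_clusters=100, cluster_size=75, inter_edges=5, seed=0):
    # Hub-and-spoke clusters joined by a few random "weak" ties; a dense n x n
    # matrix at the paper's scale (7,500 nodes) is memory-hungry, so smaller
    # values are convenient for quick tests.
    rng = np.random.default_rng(seed)
    n = n_clusters * cluster_size
    A = np.zeros((n, n))
    for c in range(n_clusters):
        hub = c * cluster_size
        for i in range(hub + 1, hub + cluster_size):
            A[i, hub] = A[hub, i] = 1.0          # strong intra-cluster ties
    for c in range(n_clusters):
        for target in rng.choice(n_clusters, size=inter_edges, replace=False):
            if target == c:
                continue
            i = rng.integers(c * cluster_size, (c + 1) * cluster_size)
            j = rng.integers(target * cluster_size, (target + 1) * cluster_size)
            A[i, j] = A[j, i] = 1.0              # weak inter-cluster ties
    return A / A.sum(axis=0, keepdims=True)      # make columns stochastic

def initial_grammars(n_clusters=100, cluster_size=75):
    # n x 2 grammar matrix: column 0 is g+, column 1 is g-; cluster 0 is the
    # source of the merger and starts fully merged.
    G = np.zeros((n_clusters * cluster_size, 2))
    G[:cluster_size, 0] = 1.0
    G[cluster_size:, 1] = 1.0
    return G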
"Results The behavior of this simulation is shown graphically in Figure 4.", "The fine/colored lines indicate the rate of M+ within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged clusters.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Each cluster passes through a period where few members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially well-connected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration, akin to Kauhanen (2016)'s rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of M+ across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change, which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016)) to those grounded in sociolinguistic and acquisition research (e.g., Yang (2009)).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only be valid for small (on the order of 10^2) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitively motivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to uncover the interplay between acquisition and social structure on the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind, which explicitly model both simultaneously, are well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, they are an invaluable complement to those more traditional methodologies." ] }
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-10
The Propagation Function
E is a g x n matrix: n individuals, g possible grammars For each individual, the proportion of input drawn from each grammar Of the previous generation G is an n x g matrix Proportions by which each individual produces L A is an n x n adjacency matrix The probabilities that nodes i, j interact given that the number of steps travelled declines by a geometric distribution parameter from that distribution [0,1]
E is a g x n matrix: n individuals, g possible grammars For each individual, the proportion of input drawn from each grammar Of the previous generation G is an n x g matrix Proportions by which each individual produces L A is an n x n adjacency matrix The probabilities that nodes i, j interact given that the number of steps travelled declines by a geometric distribution parameter from that distribution [0,1]
[]
GEM-SciDuet-train-106#paper-1281#slide-11
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with more practical and elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The Ξ± parameter from the geometric distribution 1 defines the travel decay rate.", "A lower Ξ± defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g Γ— c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t Ξ± (I βˆ’ (1 βˆ’ Ξ±)A) βˆ’1 (2) E(G t , C, A) = E n (G t , A)C(C C) βˆ’1 (3) The output of E must be broadcast to g Γ— n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n Γ— n adjacency matrix admits a c Γ— c equitable partition A Ο€ (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n if A.", "If n c, E EP is much faster to calculate because it only inverts a small c Γ— c matrix rather than a large n Γ— n. This makes it feasible to run much larger simulations than what has been done in the past.", "A Ο€ = (C C) βˆ’1 C AC (4) EEP = Ξ±G C (I βˆ’ (1 βˆ’ Ξ±)A Ο€ ) βˆ’1 (C C) βˆ’1 (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-andforth between adults' grammars G and childrens' environment E is the two-step cycle of language change (Fig.", "1) .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe .", ".", ".", "G t β†’ E t+1 β†’ G t+1 .", ".", ".", "G t+i β†’ E t+i+1 .", ".", ".", "Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral 
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for Ξ± creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to 
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp.", "58-65) .", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007) 's study of the merger's frontier on the border Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings non-merged, but the younger siblings are.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M + ) and non-merged (M βˆ’ ) input entertain both a merged (g + ) and non-merged (g βˆ’ ) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953) .", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, The learner parses s with g 1 with probability p and with g 2 with probability q = 1 βˆ’ p. p is rewarded according to whether the choice of g successfully parses s (g β†’ s) or it fails to (g s), where Ξ³ is some small constant.", "p = p + Ξ³q, g β†’ s (1 βˆ’ Ξ³)p, g s Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on Ξ³. 
lim tβ†’βˆž p t = C 2 C 1 + C 2 lim tβ†’βˆž q t = C 1 C 1 + C 2 To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C + and non-merged grammar C βˆ’ from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g + grammar collapses would-be minimal pairs into homophones, so the penalty rate C + comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001) , g + listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered regardless of the rate of M + .", "If H is the sum token frequency of all minimal pairs and h i o , h i oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn.", "6.", "In contrast, g βˆ’ listeners are sensitive to the phonemic distinction, so they misinterpret M βˆ’ input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn.", "7).", "And given M + input, they misinterpret whenever they hear the phoneme which g βˆ’ does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1-) plus times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn.", "7).", "Since g βˆ’ misinterpretation rates are a function of the rate of M + (p) in the environment, there is a threshold of M + speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C + = 1 H i min(h i o , h i oh ) (6) C βˆ’ = 1 H i p 0 ((1 βˆ’ oh )h i o + oh h i oh ) (7) +q 0 ( oh h i o + oh h i oh ) Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004 ) corpus 3 and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼ 17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M + because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007) 's sociolinguistic study.", "It predicts that younger children may have g + while their parents and even older siblings have g βˆ’ if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquires g + iff > 17% of their input is M + and they acquire g βˆ’ otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g βˆ’ or g + in a single iteration, since the proportion of g + speakers in the population is equivalent to the proportion of M + input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a 
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010) .", "There are two grammars, g + and g βˆ’ , and learners internalize one or the other according to the 17% threshold of M + in their input.", "One cluster represents the source of the merger and is initialized at 100% g + , while the rest begin 100% g βˆ’ .", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "4 This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985) 's notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "5 Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age co-horts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4 .", "The fine/colored lines indicate the rate of M + within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially wellconnected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections 
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to uncover the interplay between acquisition and social structure on the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind which explicitly model both simultaneously are well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, it is an invaluable complement to those more traditional methodologies." ] }
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-11
The Acquisition Function
Should take Et as input and produce Gt+1 as output. In the simplest case (neutral change), Gt+1 = Et^T. The following case study uses a variational learner
Should take Et as input and produce Gt+1 as output. In the simplest case (neutral change), Gt+1 = Et^T. The following case study uses a variational learner
[]
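
The slide record above describes the acquisition function abstractly (take E_t as input, return G_{t+1}; under neutral change G_{t+1} = E_t^T), and the accompanying paper text gives the environment function of Eqn. 2. Below is a minimal NumPy sketch of that two-step G -> E -> G cycle. The function names, the dense matrix inverse, the transpose on G, and the toy 3-node network are illustrative assumptions, not code taken from the paper.

```python
import numpy as np

def environment(G, A, alpha):
    """Environment function of Eqn. 2: E_n(G_t, A) = G_t^T * alpha * (I - (1 - alpha) A)^-1.

    G     : n x g row-stochastic matrix (grammar use of the current adults)
    A     : n x n column-stochastic adjacency matrix of the social network
    alpha : stopping probability of the geometric "travel" process
    Returns a g x n matrix whose column j is the grammar mixture learner j hears.
    """
    n = A.shape[0]
    # Closed form of the geometric series alpha * sum_k (1 - alpha)^k A^k,
    # i.e. interaction probabilities over paths of every length.
    interaction = alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * A)
    return G.T @ interaction

def neutral_acquisition(E):
    """Neutral change: learners reproduce their input exactly, G_{t+1} = E_{t+1}^T."""
    return E.T

# One G -> E -> G step on a toy 3-speaker network with two grammars.
A = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])   # columns sum to 1
G = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])        # speaker 0 uses g1, speakers 1-2 use g2
G_next = neutral_acquisition(environment(G, A, alpha=0.5))
```

The matrix inverse simply evaluates the infinite sum over travel lengths analytically, which is what lets the framework skip the stochastic inner loop of agent-by-agent interactions.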
GEM-SciDuet-train-106#paper-1281#slide-12
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with more practical and elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The Ξ± parameter from the geometric distribution 1 defines the travel decay rate.", "A lower Ξ± defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g Γ— c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t Ξ± (I βˆ’ (1 βˆ’ Ξ±)A) βˆ’1 (2) E(G t , C, A) = E n (G t , A)C(C C) βˆ’1 (3) The output of E must be broadcast to g Γ— n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n Γ— n adjacency matrix admits a c Γ— c equitable partition A Ο€ (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n if A.", "If n c, E EP is much faster to calculate because it only inverts a small c Γ— c matrix rather than a large n Γ— n. This makes it feasible to run much larger simulations than what has been done in the past.", "A Ο€ = (C C) βˆ’1 C AC (4) EEP = Ξ±G C (I βˆ’ (1 βˆ’ Ξ±)A Ο€ ) βˆ’1 (C C) βˆ’1 (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-andforth between adults' grammars G and childrens' environment E is the two-step cycle of language change (Fig.", "1) .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe .", ".", ".", "G t β†’ E t+1 β†’ G t+1 .", ".", ".", "G t+i β†’ E t+i+1 .", ".", ".", "Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral 
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for Ξ± creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to 
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp.", "58-65) .", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007) 's study of the merger's frontier on the border Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings non-merged, but the younger siblings are.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M + ) and non-merged (M βˆ’ ) input entertain both a merged (g + ) and non-merged (g βˆ’ ) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953) .", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, The learner parses s with g 1 with probability p and with g 2 with probability q = 1 βˆ’ p. p is rewarded according to whether the choice of g successfully parses s (g β†’ s) or it fails to (g s), where Ξ³ is some small constant.", "p = p + Ξ³q, g β†’ s (1 βˆ’ Ξ³)p, g s Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on Ξ³. 
lim tβ†’βˆž p t = C 2 C 1 + C 2 lim tβ†’βˆž q t = C 1 C 1 + C 2 To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C + and non-merged grammar C βˆ’ from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g + grammar collapses would-be minimal pairs into homophones, so the penalty rate C + comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001) , g + listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered regardless of the rate of M + .", "If H is the sum token frequency of all minimal pairs and h i o , h i oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn.", "6.", "In contrast, g βˆ’ listeners are sensitive to the phonemic distinction, so they misinterpret M βˆ’ input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn.", "7).", "And given M + input, they misinterpret whenever they hear the phoneme which g βˆ’ does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1-) plus times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn.", "7).", "Since g βˆ’ misinterpretation rates are a function of the rate of M + (p) in the environment, there is a threshold of M + speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C + = 1 H i min(h i o , h i oh ) (6) C βˆ’ = 1 H i p 0 ((1 βˆ’ oh )h i o + oh h i oh ) (7) +q 0 ( oh h i o + oh h i oh ) Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004 ) corpus 3 and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼ 17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M + because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007) 's sociolinguistic study.", "It predicts that younger children may have g + while their parents and even older siblings have g βˆ’ if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquires g + iff > 17% of their input is M + and they acquire g βˆ’ otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g βˆ’ or g + in a single iteration, since the proportion of g + speakers in the population is equivalent to the proportion of M + input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a 
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010) .", "There are two grammars, g + and g βˆ’ , and learners internalize one or the other according to the 17% threshold of M + in their input.", "One cluster represents the source of the merger and is initialized at 100% g + , while the rest begin 100% g βˆ’ .", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "4 This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985) 's notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "5 Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age co-horts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4 .", "The fine/colored lines indicate the rate of M + within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially wellconnected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections 
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to uncover the interplay between acquisition and social structure on the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind which explicitly model both simultaneously are well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, it is an invaluable complement to those more traditional methodologies." ] }
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-12
Case Study
Spread of the Cot-Caught Merger
Spread of the Cot-Caught Merger
[]
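
The case-study record above reduces Yang's variational learner to a threshold rule for the spread of the COT-CAUGHT merger: an updating learner acquires the merged grammar g+ iff more than roughly 17% of its expected input is merged, and only a random 10% age cohort of nodes re-learns at each iteration. The following is a hedged Python sketch of one such iteration; it reuses the interaction computation from the earlier sketch, and all names, default values, and the random-generator handling are assumptions for illustration rather than the authors' implementation.

```python
import numpy as np

def merger_iteration(g_plus, A, alpha=0.5, threshold=0.17, cohort=0.1, rng=None):
    """One age cohort of the COT-CAUGHT merger simulation described above.

    g_plus    : length-n vector, 1.0 for speakers of the merged grammar g+, else 0.0
    A         : n x n column-stochastic adjacency matrix
    threshold : ~17% merged input needed to acquire g+ (Yang 2009)
    cohort    : fraction of nodes that re-learn this iteration (short age cohorts)
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    # Expected proportion of merged (M+) input reaching each learner, using the
    # same closed-form interaction matrix as in the earlier sketch.
    interaction = alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * A)
    merged_input = g_plus @ interaction
    updated = np.array(g_plus, dtype=float)
    learners = rng.random(n) < cohort        # the randomly chosen learning cohort
    updated[learners] = (merged_input[learners] > threshold).astype(float)
    return updated
```

Iterating this update over an adjacency matrix built from clusters with sparse inter-cluster edges is the kind of setup that yields the cluster-by-cluster S-curves reported in the Results portion of the paper text.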
GEM-SciDuet-train-106#paper-1281#slide-13
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with more practical and elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
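To make the interaction probabilities of Eqn. 1 concrete: under the framework's geometric assumption about the number of travel steps (stopping probability α at each node), the sum over step counts has the closed form α(I − (1 − α)A)⁻¹, and multiplying by the grammar matrix gives every learner's environment, as Eqn. 2 below states. The NumPy sketch that follows only illustrates that computation; the three-agent network, the value of α, and the function name are invented here and do not come from the paper's experiments.

```python
import numpy as np

def environment(G, A, alpha=0.5):
    """Sketch of E_n(G_t, A) = E_{t+1} (Eqn. 2, read with a transpose on G_t).

    A     : (n, n) column-stochastic adjacency matrix, a_ij = weight from j to i
    G     : (n, g) row-stochastic grammar matrix, row i = grammars agent i expresses
    alpha : probability of stopping and interacting at each travel step

    Summing A^k over a geometric number of steps k gives the interaction matrix
        sum_k alpha * (1 - alpha)**k * A^k  =  alpha * inv(I - (1 - alpha) * A).
    """
    n = A.shape[0]
    M = alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * A)  # interaction probabilities
    return G.T @ M   # column j = mixture of grammars in learner j's input

# Toy example: three agents in a chain; agent 0 expresses g1, agents 1 and 2 express g2.
A = np.array([[0.0, 0.5, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 0.5, 0.0]])   # columns sum to 1
G = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])
print(environment(G, A, alpha=0.5))  # 2 x 3: how much g1 vs. g2 each learner hears
```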
The Ξ± parameter from the geometric distribution 1 defines the travel decay rate.", "A lower Ξ± defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g Γ— c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t Ξ± (I βˆ’ (1 βˆ’ Ξ±)A) βˆ’1 (2) E(G t , C, A) = E n (G t , A)C(C C) βˆ’1 (3) The output of E must be broadcast to g Γ— n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n Γ— n adjacency matrix admits a c Γ— c equitable partition A Ο€ (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n if A.", "If n c, E EP is much faster to calculate because it only inverts a small c Γ— c matrix rather than a large n Γ— n. This makes it feasible to run much larger simulations than what has been done in the past.", "A Ο€ = (C C) βˆ’1 C AC (4) EEP = Ξ±G C (I βˆ’ (1 βˆ’ Ξ±)A Ο€ ) βˆ’1 (C C) βˆ’1 (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-andforth between adults' grammars G and childrens' environment E is the two-step cycle of language change (Fig.", "1) .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe .", ".", ".", "G t β†’ E t+1 β†’ G t+1 .", ".", ".", "G t+i β†’ E t+i+1 .", ".", ".", "Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral 
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for Ξ± creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to 
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp.", "58-65) .", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007) 's study of the merger's frontier on the border Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings non-merged, but the younger siblings are.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M + ) and non-merged (M βˆ’ ) input entertain both a merged (g + ) and non-merged (g βˆ’ ) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953) .", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, The learner parses s with g 1 with probability p and with g 2 with probability q = 1 βˆ’ p. p is rewarded according to whether the choice of g successfully parses s (g β†’ s) or it fails to (g s), where Ξ³ is some small constant.", "p = p + Ξ³q, g β†’ s (1 βˆ’ Ξ³)p, g s Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on Ξ³. 
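As a sanity check on the reward dynamics just described, the toy sketch below runs a single variational learner with the Linear Reward-Punishment update (reward the grammar chosen to parse a token when it succeeds, punish it when it fails) and shows the weight on g1 drifting toward the limit quoted immediately below, which depends only on the two penalty rates and not on γ. The penalty probabilities, γ, and token count here are invented toy values, not the corpus-derived quantities used in the paper.

```python
import random

def variational_learner(c1, c2, gamma=0.01, tokens=50_000, seed=0):
    """Toy Linear Reward-Punishment learner over two competing grammars.

    c1, c2 : penalty probabilities (chance that g1 / g2 fails to parse a token).
    Returns the learner's final weight p on g1; over many tokens p should
    hover near c2 / (c1 + c2), whatever the (small) value of gamma.
    """
    rng = random.Random(seed)
    p = 0.5                                   # initial weight on grammar g1
    for _ in range(tokens):
        use_g1 = rng.random() < p             # choose a grammar to parse with
        fails = rng.random() < (c1 if use_g1 else c2)
        if use_g1:
            p = (1 - gamma) * p if fails else p + gamma * (1 - p)
        else:                                 # symmetric update on q = 1 - p, written in terms of p
            p = p + gamma * (1 - p) if fails else (1 - gamma) * p
    return p

# Invented penalty rates: g1 fails on 5% of tokens, g2 on 15%; expect p near 0.75.
print(variational_learner(c1=0.05, c2=0.15))
```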
lim tβ†’βˆž p t = C 2 C 1 + C 2 lim tβ†’βˆž q t = C 1 C 1 + C 2 To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C + and non-merged grammar C βˆ’ from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g + grammar collapses would-be minimal pairs into homophones, so the penalty rate C + comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001) , g + listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered regardless of the rate of M + .", "If H is the sum token frequency of all minimal pairs and h i o , h i oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn.", "6.", "In contrast, g βˆ’ listeners are sensitive to the phonemic distinction, so they misinterpret M βˆ’ input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn.", "7).", "And given M + input, they misinterpret whenever they hear the phoneme which g βˆ’ does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1-) plus times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn.", "7).", "Since g βˆ’ misinterpretation rates are a function of the rate of M + (p) in the environment, there is a threshold of M + speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C + = 1 H i min(h i o , h i oh ) (6) C βˆ’ = 1 H i p 0 ((1 βˆ’ oh )h i o + oh h i oh ) (7) +q 0 ( oh h i o + oh h i oh ) Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004 ) corpus 3 and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼ 17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M + because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007) 's sociolinguistic study.", "It predicts that younger children may have g + while their parents and even older siblings have g βˆ’ if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquires g + iff > 17% of their input is M + and they acquire g βˆ’ otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g βˆ’ or g + in a single iteration, since the proportion of g + speakers in the population is equivalent to the proportion of M + input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a 
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010) .", "There are two grammars, g + and g βˆ’ , and learners internalize one or the other according to the 17% threshold of M + in their input.", "One cluster represents the source of the merger and is initialized at 100% g + , while the rest begin 100% g βˆ’ .", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "4 This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985) 's notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "5 Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age co-horts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4 .", "The fine/colored lines indicate the rate of M + within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially wellconnected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections 
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to undercover the interplay between acquisition and social structure on the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind which explicitly model both simultaneously is well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, it is an invaluable complement to those more traditional methodologies." ] }
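As a sketch of the modularity this discussion points to, the toy code below holds the population model fixed (the environment function of Eqn. 2) and slots in one possible learning function, the >17% threshold learner from Section 4, then runs the outer G → E → G loop on a small ring network. The six-agent ring, the choice of α, the two initially merged agents, and the iteration count are all invented for illustration; the paper's experiments use 100 clusters of 75 individuals, so this sketch is not expected to reproduce its results, only to show how an acquisition model and a population model plug together.

```python
import numpy as np

def environment(G, A, alpha=0.5):
    """E_n from Eqn. 2; row j of the result is learner j's input mixture over grammars."""
    n = A.shape[0]
    M = alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * A)  # interaction probabilities
    return (G.T @ M).T

def threshold_learner(E, threshold=0.17):
    """Section 4's learner: acquire g+ (column 1) iff its input share exceeds the threshold."""
    G_next = np.zeros_like(E)
    merged = E[:, 1] > threshold
    G_next[merged, 1] = 1.0
    G_next[~merged, 0] = 1.0
    return G_next

def simulate(G, A, learn, iterations=5, alpha=0.5):
    """Outer loop of the framework: alternate the environment step and the acquisition step."""
    history = [G]
    for _ in range(iterations):
        G = learn(environment(G, A, alpha))
        history.append(G)
    return history

# Toy ring of six agents; agents 0 and 1 start out merged (g+), the rest non-merged (g-).
n = 6
A = np.zeros((n, n))
for j in range(n):
    A[(j - 1) % n, j] = 0.5   # each agent's two neighbours; columns sum to 1
    A[(j + 1) % n, j] = 0.5
G0 = np.zeros((n, 2))
G0[:, 0] = 1.0                # everyone starts non-merged ...
G0[0] = [0.0, 1.0]            # ... except agents 0 and 1
G0[1] = [0.0, 1.0]
for step, G in enumerate(simulate(G0, A, threshold_learner)):
    print(step, G[:, 1])      # share of g+ at each agent over time
```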
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-13
Model for Merger Acquisition (Yang 2009)
Learners will acquire the merged grammar iff more than ~17% of their environment is merged + Accounts for mergers' tendency to spread (Labov 1994) + 17% is close to the merged rate estimated in Johnson 2007 In a perfectly-mixed model, the population will immediately fix at 100% g+ or g- Claim: The merged grammar has a processing advantage Claim: Merged listeners have a lower rate of initial misinterpretation Claim: Only minimal pairs are relevant If speaker A- and listener B- are both non-merged, B- misunderstands A- at the rate of mishearing one vowel for the other (A- said one vowel but B- heard the other) If A+ speaks to B-, B- initially misunderstands whenever A+ produces the vowel B- does not expect for that word If A- or A+ speaks to B+, B+ cannot hear A-'s distinctions. Initial misunderstandings come down to lexical access - if the intended meaning is not the most frequent meaning (Caramazza et al. 2001)
Learners will acquire the merged grammar iff more than ~17% of their environment is merged + Accounts for mergers' tendency to spread (Labov 1994) + 17% is close to the merged rate estimated in Johnson 2007 In a perfectly-mixed model, the population will immediately fix at 100% g+ or g- Claim: The merged grammar has a processing advantage Claim: Merged listeners have a lower rate of initial misinterpretation Claim: Only minimal pairs are relevant If speaker A- and listener B- are both non-merged, B- misunderstands A- at the rate of mishearing one vowel for the other (A- said one vowel but B- heard the other) If A+ speaks to B-, B- initially misunderstands whenever A+ produces the vowel B- does not expect for that word If A- or A+ speaks to B+, B+ cannot hear A-'s distinctions. Initial misunderstandings come down to lexical access - if the intended meaning is not the most frequent meaning (Caramazza et al. 2001)
[]
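The mechanics summarised on this slide can also be sketched numerically: Eqn. 6's penalty for merged listeners falls out of minimal-pair token frequencies, and pairing it with a deliberately simplified stand-in for the non-merged penalty lets one search for the proportion of merged input at which g+ gains the advantage. The word frequencies, the mishearing rate, and the crude assumption that non-merged listeners misparse about half of merged minimal-pair tokens are all invented here; this is not a reproduction of Eqn. 7, so the resulting threshold will not match the ~17% derived in the paper.

```python
def c_plus(pairs):
    """Eqn. 6: merged listeners only misparse the less frequent member of each pair."""
    H = sum(a + b for a, b in pairs.values())
    return sum(min(a, b) for a, b in pairs.values()) / H

def c_minus(p_merged, mishear=0.03):
    """Crude stand-in for Eqn. 7: non-merged listeners mishear non-merged input at a
    small fixed rate and, by assumption here, misparse about half of merged tokens."""
    return p_merged * 0.5 + (1.0 - p_merged) * mishear

def merger_threshold(pairs, mishear=0.03, step=0.001):
    """Smallest proportion of merged input at which the merged grammar has the lower penalty."""
    cp = c_plus(pairs)
    p = 0.0
    while p <= 1.0:
        if c_minus(p, mishear) > cp:
            return round(p, 3)
        p += step
    return None

# Invented token frequencies for a few cot/caught-type minimal pairs.
pairs = {"cot/caught": (120, 480), "Don/Dawn": (60, 90), "stock/stalk": (300, 30)}
print(c_plus(pairs), merger_threshold(pairs))
```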
GEM-SciDuet-train-106#paper-1281#slide-14
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with more practical and elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The Ξ± parameter from the geometric distribution 1 defines the travel decay rate.", "A lower Ξ± defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g Γ— c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t Ξ± (I βˆ’ (1 βˆ’ Ξ±)A) βˆ’1 (2) E(G t , C, A) = E n (G t , A)C(C C) βˆ’1 (3) The output of E must be broadcast to g Γ— n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n Γ— n adjacency matrix admits a c Γ— c equitable partition A Ο€ (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n if A.", "If n c, E EP is much faster to calculate because it only inverts a small c Γ— c matrix rather than a large n Γ— n. This makes it feasible to run much larger simulations than what has been done in the past.", "A Ο€ = (C C) βˆ’1 C AC (4) EEP = Ξ±G C (I βˆ’ (1 βˆ’ Ξ±)A Ο€ ) βˆ’1 (C C) βˆ’1 (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-andforth between adults' grammars G and childrens' environment E is the two-step cycle of language change (Fig.", "1) .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe .", ".", ".", "G t β†’ E t+1 β†’ G t+1 .", ".", ".", "G t+i β†’ E t+i+1 .", ".", ".", "Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral 
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for Ξ± creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to 
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp.", "58-65) .", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007) 's study of the merger's frontier on the border Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings non-merged, but the younger siblings are.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M + ) and non-merged (M βˆ’ ) input entertain both a merged (g + ) and non-merged (g βˆ’ ) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953) .", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, The learner parses s with g 1 with probability p and with g 2 with probability q = 1 βˆ’ p. p is rewarded according to whether the choice of g successfully parses s (g β†’ s) or it fails to (g s), where Ξ³ is some small constant.", "p = p + Ξ³q, g β†’ s (1 βˆ’ Ξ³)p, g s Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on Ξ³. 
lim tβ†’βˆž p t = C 2 C 1 + C 2 lim tβ†’βˆž q t = C 1 C 1 + C 2 To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C + and non-merged grammar C βˆ’ from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g + grammar collapses would-be minimal pairs into homophones, so the penalty rate C + comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001) , g + listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered regardless of the rate of M + .", "If H is the sum token frequency of all minimal pairs and h i o , h i oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn.", "6.", "In contrast, g βˆ’ listeners are sensitive to the phonemic distinction, so they misinterpret M βˆ’ input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn.", "7).", "And given M + input, they misinterpret whenever they hear the phoneme which g βˆ’ does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1-) plus times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn.", "7).", "Since g βˆ’ misinterpretation rates are a function of the rate of M + (p) in the environment, there is a threshold of M + speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C + = 1 H i min(h i o , h i oh ) (6) C βˆ’ = 1 H i p 0 ((1 βˆ’ oh )h i o + oh h i oh ) (7) +q 0 ( oh h i o + oh h i oh ) Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004 ) corpus 3 and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼ 17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M + because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007) 's sociolinguistic study.", "It predicts that younger children may have g + while their parents and even older siblings have g βˆ’ if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquires g + iff > 17% of their input is M + and they acquire g βˆ’ otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g βˆ’ or g + in a single iteration, since the proportion of g + speakers in the population is equivalent to the proportion of M + input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a 
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010) .", "There are two grammars, g + and g βˆ’ , and learners internalize one or the other according to the 17% threshold of M + in their input.", "One cluster represents the source of the merger and is initialized at 100% g + , while the rest begin 100% g βˆ’ .", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "4 This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985) 's notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "5 Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age co-horts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4 .", "The fine/colored lines indicate the rate of M + within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially wellconnected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections 
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to undercover the interplay between acquisition and social structure on the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind which explicitly model both simultaneously is well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, it is an invaluable complement to those more traditional methodologies." ] }
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-14
Variational Model for Merger Acquisition
Probability of initial misunderstanding depends on minimal pair frequencies and the mix of merged (+) and non-merged (-) speakers in the environment. Using minimal pair frequencies estimated from SUBTLEXus and a variational learner, learners will acquire the merged grammar iff more than ~17% of their environment is merged (Yang 2009). Penalty probabilities depend on: mi, ni = frequencies of each member of a minimal pair; H = Σi (mi + ni); ε = probability of mishearing one vowel for the other; C+ = (1/H) Σi min(mi, ni) (hearing the less freq word); C− = (1/H) Σi [p·((1−ε)·mi + ε·ni) + q·(ε·mi + ε·ni)] (misinterpreting input)
Probability of initial misunderstanding depends on minimal pair frequencies and the mix of merged (+) and non-merged (-) speakers in the environment. Using minimal pair frequencies estimated from SUBTLEXus and a variational learner, learners will acquire the merged grammar iff more than ~17% of their environment is merged (Yang 2009). Penalty probabilities depend on: mi, ni = frequencies of each member of a minimal pair; H = Σi (mi + ni); ε = probability of mishearing one vowel for the other; C+ = (1/H) Σi min(mi, ni) (hearing the less freq word); C− = (1/H) Σi [p·((1−ε)·mi + ε·ni) + q·(ε·mi + ε·ni)] (misinterpreting input)
[]
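The penalty probabilities behind this slide lend themselves to a short worked example. The following is a minimal Python sketch, not the authors' code, of the variational-learner fitness comparison as reconstructed from Eqns. 6 and 7 in the paper body with a single mishearing rate: it computes C+ and C-(p) from minimal-pair token frequencies and then finds the proportion of merged speakers at which the merged grammar gains the advantage. All numbers and helper names (the `pairs` frequencies, `eps`, `penalty_merged`, `penalty_nonmerged`) are illustrative placeholders, not values or code from the paper or its corpora.

```python
import numpy as np

# Hypothetical token frequencies (freq_o, freq_oh) for a few COT-CAUGHT
# minimal pairs; illustrative numbers only, not corpus counts.
pairs = np.array([
    [500.0, 80.0],   # e.g. cot / caught
    [40.0,  90.0],   # e.g. Don / Dawn
    [60.0,   8.0],   # e.g. stock / stalk
    [70.0,   5.0],   # e.g. odd / awed
    [30.0,  25.0],   # e.g. collar / caller
])
eps = 0.05           # assumed rate of mishearing one low-back vowel as the other
H = pairs.sum()      # total token frequency of all minimal-pair members

def penalty_merged(pairs, H):
    """C+: a merged listener misparses only the less frequent member of each pair."""
    return pairs.min(axis=1).sum() / H

def penalty_nonmerged(pairs, H, eps, p):
    """C-(p): a non-merged listener's misparse rate given a proportion p of
    merged (M+) speakers in the environment (reconstruction of Eqn. 7 with a
    single mishearing rate eps)."""
    m, n = pairs[:, 0], pairs[:, 1]
    q = 1.0 - p
    per_pair = p * ((1.0 - eps) * m + eps * n) + q * (eps * m + eps * n)
    return per_pair.sum() / H

c_plus = penalty_merged(pairs, H)
# Smallest proportion of merged speakers at which C+ < C-(p), i.e. the point
# where the merged grammar becomes the fitter one for the variational learner.
grid = np.linspace(0.0, 1.0, 1001)
advantaged = np.array([penalty_nonmerged(pairs, H, eps, p) > c_plus for p in grid])
threshold = grid[advantaged.argmax()] if advantaged.any() else None
if threshold is None:
    print(f"C+ = {c_plus:.3f}; the merged grammar is never advantaged")
else:
    print(f"C+ = {c_plus:.3f}; merger-advantage threshold = {threshold:.2%}")
```

With the skewed placeholder frequencies above the break-even point lands near 18%, in the same neighborhood as the ~17% the paper reports from corpus frequencies, but the exact value depends entirely on the frequency profile of the minimal pairs.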
GEM-SciDuet-train-106#paper-1281#slide-15
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with more practical and elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The Ξ± parameter from the geometric distribution 1 defines the travel decay rate.", "A lower Ξ± defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g Γ— c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t Ξ± (I βˆ’ (1 βˆ’ Ξ±)A) βˆ’1 (2) E(G t , C, A) = E n (G t , A)C(C C) βˆ’1 (3) The output of E must be broadcast to g Γ— n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n Γ— n adjacency matrix admits a c Γ— c equitable partition A Ο€ (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n if A.", "If n c, E EP is much faster to calculate because it only inverts a small c Γ— c matrix rather than a large n Γ— n. This makes it feasible to run much larger simulations than what has been done in the past.", "A Ο€ = (C C) βˆ’1 C AC (4) EEP = Ξ±G C (I βˆ’ (1 βˆ’ Ξ±)A Ο€ ) βˆ’1 (C C) βˆ’1 (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-andforth between adults' grammars G and childrens' environment E is the two-step cycle of language change (Fig.", "1) .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe .", ".", ".", "G t β†’ E t+1 β†’ G t+1 .", ".", ".", "G t+i β†’ E t+i+1 .", ".", ".", "Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral 
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for Ξ± creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to 
] }
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-15
Acquisition Function
An individual acquires 100% g+ if >17% of their environment is generated by merged (g+) speakers, and acquires g- otherwise
An individual acquires 100% g+ if >17% of their environment is generated by merged (g+) speakers, and acquires g- otherwise
[]
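The acquisition rule on this slide only does interesting work once it is coupled to the propagation step, so a compact sketch of one simulation iteration may help. The snippet below is a rough Python illustration, not the authors' implementation, of the environment function of Eqn. 2, E_n(G_t, A) = G_t^T * alpha * (I - (1 - alpha)A)^(-1), followed by the categorical 17% acquisition rule. The six-node, two-cluster adjacency matrix and alpha = 0.5 are toy placeholders standing in for the 100-cluster network and parameter choices of the Model Setup, and the function names are mine.

```python
import numpy as np

def environment(G, A, alpha):
    """E_n(G_t, A) = G_t^T * alpha * (I - (1 - alpha) * A)^(-1).
    G is an n x g row-stochastic grammar matrix, A an n x n column-stochastic
    adjacency matrix, alpha the travel decay rate; the result is a g x n
    matrix whose column j is the input distribution learner j is exposed to."""
    n = A.shape[0]
    return G.T @ (alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * A))

def acquire(E, threshold=0.17):
    """Categorical learners: internalize g+ (row 0 of E) iff its share of the
    input exceeds the threshold, otherwise internalize g-."""
    merged = (E[0] > threshold).astype(float)
    return np.column_stack([merged, 1.0 - merged])  # new n x g grammar matrix

# Toy symmetric network: nodes 0-2 form a merged cluster, nodes 3-5 a
# non-merged one, with weak links bridging them; columns of A are normalized
# so that edge weights can be read as probabilities.
W = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)
A = W / W.sum(axis=0, keepdims=True)

G = np.array([[1.0, 0.0]] * 3 + [[0.0, 1.0]] * 3)  # initial grammars per learner
for t in range(5):
    G = acquire(environment(G, A, alpha=0.5))
    print(f"iteration {t + 1}: proportion of g+ speakers = {G[:, 0].mean():.2f}")
```

Run repeatedly, the proportion of g+ speakers climbs from one half toward 1.0 as the merger crosses the weak bridge between the two clusters, which matches the qualitative cluster-to-cluster spread reported for the full simulation; the slope and timing in the real model depend on the network and on updating only a fraction of nodes per iteration.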
GEM-SciDuet-train-106#paper-1281#slide-16
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with practically and more elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is $n \times n$, where each element $a_{ij}$ is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every $a_{ij} = a_{ji}$) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each $a_{ij} = \frac{1}{n}$.", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an $n \times c$ indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as $A^k$.", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i, $p(ij)$, is the probability of them interacting after k steps times the probability of k, summed over all values of k, as in Eqn. 1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "$p(ij) = \sum_{k} p(ij \mid k\ \text{steps})\, p(k\ \text{steps})$ (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language $L_g$) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, an $n \times g$ row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as $G^{\top}$.", "An environment function $E_n(G_t, A) = E_{t+1}$, shown in Eqn. 2, calculates E by first calculating all the interaction probabilities in the network and then multiplying those by the grammars which every agent expresses to get the environment E.
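As a concrete illustration of Eqn. 1 and the environment computation it feeds into, here is a small hypothetical Python/NumPy sketch (not from the paper; the matrix sizes, random weights, and the stop probability `alpha` are made-up toy values, and `alpha` anticipates the travel decay parameter introduced next):

```python
import numpy as np

# Hypothetical illustration of Eqn. 1 and the resulting environment matrix E.
rng = np.random.default_rng(0)

n, g = 5, 2                          # individuals, grammars (toy sizes)
A = rng.random((n, n))
A /= A.sum(axis=0, keepdims=True)    # column-stochastic adjacency matrix

alpha = 0.3                          # probability of stopping at each step
G = rng.random((n, g))
G /= G.sum(axis=1, keepdims=True)    # row-stochastic grammar matrix

# p(ij) = sum_k p(ij | k steps) p(k steps), with k geometrically distributed:
# sum_k alpha (1 - alpha)^k A^k = alpha (I - (1 - alpha) A)^(-1)
P = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A)

# Each learner's environment is the interaction-weighted mixture of the
# grammars expressed around them (one column per learner).
E = G.T @ P
print(E.round(3))
```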
The α parameter from the geometric distribution defines the travel decay rate.", "A lower α defines conceptually more mobile agents.", "More generally, $E_n$ is a special case of $E(G_t, C_t, A_t) = E_{t+1}$ where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn. 3, which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a $g \times c$ matrix giving the environment of the average agent in each community.", "$E_n(G_t, A) = G_t^{\top}\, \alpha\, (I - (1 - \alpha)A)^{-1}$ (2) $E(G_t, C, A) = E_n(G_t, A)\, C\, (C^{\top}C)^{-1}$ (3) The output of E must be broadcast to $g \times n$, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the $n \times n$ adjacency matrix admits a $c \times c$ equitable partition $A^{\pi}$ (Eqn. 4) (Schaub et al., 2016), which permits an alternate environment function $E_{EP}(G_t, C, A)$, shown in Eqn. 5, that is equivalent to the lossless $E_n$ when A admits such a partition.", "If $n \gg c$, $E_{EP}$ is much faster to calculate because it only inverts a small $c \times c$ matrix rather than a large $n \times n$ one. This makes it feasible to run much larger simulations than what has been done in the past.", "$A^{\pi} = (C^{\top}C)^{-1} C^{\top} A C$ (4) $E_{EP} = \alpha\, G^{\top} C\, (I - (1 - \alpha)A^{\pi})^{-1} (C^{\top}C)^{-1}$ (5) Learning in the Network The environment function describes what inputs $E_{t+1}$ are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars $G_t$.", "The second component of the framework describes the learning algorithm $A(E_{t+1}) = G_{t+1}$, how individuals respond to their input environment.", "The resulting $G_{t+1}$ describes which grammars those learners will eventually contribute to the subsequent generation's environment $E_{t+2}$.", "This back-and-forth between adults' grammars G and children's environment E is the two-step cycle of language change (Fig. 1).", "Figure 1: Language change as an alternation between G and E matrices ($\ldots G_t \to E_{t+1} \to G_{t+1} \ldots G_{t+i} \to E_{t+i+1} \ldots$).", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.), so A is rarely neutral.", "A neutral and a simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016), which tested the behavior of neutral change in networks of single-grammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and the choice of categorical learners conspires to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016), initialize all members of cluster 1 with grammar $g_1$ and all members of cluster 2 with grammar $g_2$, and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs, as in the original model.", "In a pair of infinitely large clusters, or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of $g_1$ and $g_2$ after some number of iterations depending on the specifics of the network shape and the setting for α, creating the red curves in Fig. 2.", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016).", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3, results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in a small population might conclude that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994).", "Yang (2009)'s acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the non-merged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007)'s detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar; however, when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger, describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada, among others, where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp. 58-65).", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007)'s study of the merger's frontier on the border between Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings are non-merged, but the younger siblings are merged.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged ($M_+$) and non-merged ($M_-$) input entertain both a merged ($g_+$) and a non-merged ($g_-$) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953).", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, the learner parses s with $g_1$ with probability p and with $g_2$ with probability $q = 1 - p$. p is rewarded according to whether the choice of g successfully parses s ($g \to s$) or fails to ($g \nrightarrow s$), where γ is some small constant.", "$p' = p + \gamma q$ if $g \to s$, and $p' = (1 - \gamma)p$ if $g \nrightarrow s$. Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause $g \nrightarrow s$. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on γ.
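As a rough illustration (hypothetical code, not from the paper), the reward-punishment dynamics can be simulated directly; the penalty probabilities C1 and C2 below are arbitrary stand-ins, and p drifts toward C2/(C1 + C2), matching the limits given next:

```python
import random

# Hypothetical sketch of the variational (Linear Reward-Punishment) learner.
# C1 and C2 are the penalty probabilities of g1 and g2; gamma is a small constant.
def variational_learner(C1, C2, gamma=0.01, tokens=200_000, seed=0):
    random.seed(seed)
    p = 0.5                                  # probability of parsing with g1
    for _ in range(tokens):
        if random.random() < p:              # learner tries g1 on this token
            if random.random() < C1:         # g1 fails to parse: punish g1
                p = (1 - gamma) * p
            else:                            # g1 parses: reward g1
                p = p + gamma * (1 - p)
        else:                                # learner tries g2
            if random.random() < C2:         # g2 fails: g1's share grows
                p = p + gamma * (1 - p)
            else:                            # g2 parses: reward g2
                p = (1 - gamma) * p
    return p

# With C1 < C2, g1 has the advantage; p ends up near C2 / (C1 + C2).
print(variational_learner(C1=0.02, C2=0.06))  # roughly 0.75
```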
$\lim_{t \to \infty} p_t = \frac{C_2}{C_1 + C_2}$, $\lim_{t \to \infty} q_t = \frac{C_1}{C_1 + C_2}$. To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar $C_+$ and the non-merged grammar $C_-$ from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged $g_+$ grammar collapses would-be minimal pairs into homophones, so the penalty rate $C_+$ comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001), $g_+$ listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered, regardless of the rate of $M_+$.", "If H is the sum token frequency of all minimal pairs and $h^i_o$, $h^i_{oh}$ are the frequencies of the ith pair's members, then $C_+$ is calculated by Eqn. 6.", "In contrast, $g_-$ listeners are sensitive to the phonemic distinction, so they misinterpret $M_-$ input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn. 7).", "And given $M_+$ input, they misinterpret whenever they hear the phoneme which $g_-$ does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel ($1 - \epsilon$), plus $\epsilon$ times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn. 7).", "Since $g_-$ misinterpretation rates are a function of the rate of $M_+$ (p) in the environment, there is a threshold of $M_+$ speakers above which the merged grammar has a fitness advantage over the non-merged one.", "$C_+ = \frac{1}{H} \sum_i \min(h^i_o, h^i_{oh})$ (6) $C_- = \frac{1}{H} \sum_i \left[ p_0 \left( (1 - \epsilon_{oh}) h^i_o + \epsilon_{oh} h^i_{oh} \right) + q_0 \left( \epsilon_{oh} h^i_o + \epsilon_{oh} h^i_{oh} \right) \right]$ (7) Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004) corpus and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least roughly 17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for $M_+$ because it is well under the 50% threshold expected for neutral (non-advantaged) change, and it is very close to what was found in Johnson (2007)'s sociolinguistic study.", "It predicts that younger children may have $g_+$ while their parents and even older siblings have $g_-$ if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquire $g_+$ iff more than 17% of their input is $M_+$, and they acquire $g_-$ otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either $g_-$ or $g_+$ in a single iteration, since the proportion of $g_+$ speakers in the population is equivalent to the proportion of $M_+$ input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010).", "There are two grammars, $g_+$ and $g_-$, and learners internalize one or the other according to the 17% threshold of $M_+$ in their input.", "One cluster represents the source of the merger and is initialized at 100% $g_+$, while the rest begin 100% $g_-$.", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters, representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "This network structure echoes work in sociolinguistics, in particular Milroy and Milroy (1985)'s notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by $E_n$ because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age cohorts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration, because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4.", "The fine/colored lines indicate the rate of $M_+$ within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged clusters.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Within each cluster, the change follows the classic diffusion pattern (Rogers Everett, 1995): a period of slow growth while only a few members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially well-connected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging, because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another, and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to uncover the interplay between acquisition and social structure in the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind, which explicitly model both simultaneously, are well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, they are an invaluable complement to those more traditional methodologies." ] }
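To illustrate the modularity claim concretely, here is a short hypothetical sketch (Python/NumPy, not from the paper; the function names, the threshold learner, and the parameter values are illustrative assumptions): the population step is the environment function of Eqn. 2, and any acquisition model that maps environments to grammars can be slotted into the same loop.

```python
import numpy as np

# Hypothetical sketch of the two-step G -> E -> G cycle with a pluggable learner.
def environment(G, A, alpha=0.3):
    # Eqn. 2: interaction-weighted mixture of the grammars around each learner.
    n = A.shape[0]
    P = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A)
    return G.T @ P                          # g x n environment matrix

def threshold_learner(E, threshold=0.17):
    # Learners acquire grammar 0 iff it makes up more than `threshold`
    # of their input, mirroring the 17% criterion discussed above.
    acquired = (E[0] > threshold).astype(float)
    return np.stack([acquired, 1.0 - acquired], axis=1)   # new n x 2 G

def simulate(G0, A, learner, iterations=20):
    G, history = G0, [G0[:, 0].mean()]
    for _ in range(iterations):
        E = environment(G, A)
        G = learner(E)                      # swap in any acquisition model here
        history.append(G[:, 0].mean())
    return history
```

A different acquisition model, for example the variational learner sketched earlier, could replace `threshold_learner` here without any change to the population code.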
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-16
Network Model
100 clusters of 75 individuals each. Each cluster is centralised randomly such that some community members are better connected than others. One cluster begins 100% merged. The rest start 100% non-merged (Rhode Island). Half the RI clusters are connected to the MA cluster (the Frontier). Two members of each RI cluster are randomly connected to other clusters.
100 clusters of 75 individuals each. Each cluster is centralised randomly such that some community members are better connected than others. One cluster begins 100% merged. The rest start 100% non-merged (Rhode Island). Half the RI clusters are connected to the MA cluster (the Frontier). Two members of each RI cluster are randomly connected to other clusters.
[]
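A rough, hypothetical sketch of how a clustered network like the one on this slide might be constructed (the hub-based centralization, the specific edge weights, and the link counts are illustrative guesses, not the paper's implementation):

```python
import numpy as np

# Hypothetical construction of a clustered, column-stochastic network:
# `n_clusters` centralized clusters of `size` members, cluster 0 fully merged.
# (The paper additionally links the merged cluster to half the others;
#  that detail is omitted here for brevity.)
def build_network(n_clusters=100, size=75, inter_links=2, seed=0):
    rng = np.random.default_rng(seed)
    n = n_clusters * size
    A = np.zeros((n, n))
    for c in range(n_clusters):
        lo = c * size
        hub = lo + rng.integers(size)        # a well-connected "hub" member
        for i in range(lo, lo + size):
            A[hub, i] += 1.0                 # everyone talks to the hub
            A[i, hub] += 1.0
        for _ in range(3 * size):            # weaker random in-cluster links
            i, j = lo + rng.integers(size, size=2)
            A[i, j] += 0.2
    for c in range(1, n_clusters):           # weak links between clusters
        for _ in range(inter_links):
            other = rng.integers(n_clusters)
            i = c * size + rng.integers(size)
            j = other * size + rng.integers(size)
            A[i, j] += 0.1
            A[j, i] += 0.1
    A /= A.sum(axis=0, keepdims=True)        # make A column stochastic
    merged = np.zeros(n)
    merged[:size] = 1.0                      # cluster 0 starts 100% merged
    return A, merged

# Small demo sizes; the slide's setting would be build_network(100, 75).
A, merged = build_network(n_clusters=10, size=20)
```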
GEM-SciDuet-train-106#paper-1281#slide-17
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with practically and more elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The Ξ± parameter from the geometric distribution 1 defines the travel decay rate.", "A lower Ξ± defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g Γ— c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t Ξ± (I βˆ’ (1 βˆ’ Ξ±)A) βˆ’1 (2) E(G t , C, A) = E n (G t , A)C(C C) βˆ’1 (3) The output of E must be broadcast to g Γ— n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n Γ— n adjacency matrix admits a c Γ— c equitable partition A Ο€ (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n if A.", "If n c, E EP is much faster to calculate because it only inverts a small c Γ— c matrix rather than a large n Γ— n. This makes it feasible to run much larger simulations than what has been done in the past.", "A Ο€ = (C C) βˆ’1 C AC (4) EEP = Ξ±G C (I βˆ’ (1 βˆ’ Ξ±)A Ο€ ) βˆ’1 (C C) βˆ’1 (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-andforth between adults' grammars G and childrens' environment E is the two-step cycle of language change (Fig.", "1) .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe .", ".", ".", "G t β†’ E t+1 β†’ G t+1 .", ".", ".", "G t+i β†’ E t+i+1 .", ".", ".", "Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral 
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for Ξ± creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to 
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp.", "58-65) .", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007) 's study of the merger's frontier on the border Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings non-merged, but the younger siblings are.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M + ) and non-merged (M βˆ’ ) input entertain both a merged (g + ) and non-merged (g βˆ’ ) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953) .", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, The learner parses s with g 1 with probability p and with g 2 with probability q = 1 βˆ’ p. p is rewarded according to whether the choice of g successfully parses s (g β†’ s) or it fails to (g s), where Ξ³ is some small constant.", "p = p + Ξ³q, g β†’ s (1 βˆ’ Ξ³)p, g s Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on Ξ³. 
lim tβ†’βˆž p t = C 2 C 1 + C 2 lim tβ†’βˆž q t = C 1 C 1 + C 2 To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C + and non-merged grammar C βˆ’ from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g + grammar collapses would-be minimal pairs into homophones, so the penalty rate C + comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001) , g + listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered regardless of the rate of M + .", "If H is the sum token frequency of all minimal pairs and h i o , h i oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn.", "6.", "In contrast, g βˆ’ listeners are sensitive to the phonemic distinction, so they misinterpret M βˆ’ input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn.", "7).", "And given M + input, they misinterpret whenever they hear the phoneme which g βˆ’ does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1-) plus times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn.", "7).", "Since g βˆ’ misinterpretation rates are a function of the rate of M + (p) in the environment, there is a threshold of M + speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C + = 1 H i min(h i o , h i oh ) (6) C βˆ’ = 1 H i p 0 ((1 βˆ’ oh )h i o + oh h i oh ) (7) +q 0 ( oh h i o + oh h i oh ) Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004 ) corpus 3 and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼ 17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M + because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007) 's sociolinguistic study.", "It predicts that younger children may have g + while their parents and even older siblings have g βˆ’ if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquires g + iff > 17% of their input is M + and they acquire g βˆ’ otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g βˆ’ or g + in a single iteration, since the proportion of g + speakers in the population is equivalent to the proportion of M + input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a 
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010) .", "There are two grammars, g + and g βˆ’ , and learners internalize one or the other according to the 17% threshold of M + in their input.", "One cluster represents the source of the merger and is initialized at 100% g + , while the rest begin 100% g βˆ’ .", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "4 This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985) 's notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "5 Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age co-horts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4 .", "The fine/colored lines indicate the rate of M + within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially wellconnected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections 
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to undercover the interplay between acquisition and social structure on the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind which explicitly model both simultaneously is well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, it is an invaluable complement to those more traditional methodologies." ] }
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-17
Merger Rate in Rhode Island over Time
The average merger rate across all Rhode Island clusters follows an The 99 RI community cluster curves are also S-shaped Steep slopes = rapid change Cluster Merger Rates Rhode Island Avg
The average merger rate across all Rhode Island clusters follows an The 99 RI community cluster curves are also S-shaped Steep slopes = rapid change Cluster Merger Rates Rhode Island Avg
[]
GEM-SciDuet-train-106#paper-1281#slide-18
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with practically and more elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The Ξ± parameter from the geometric distribution 1 defines the travel decay rate.", "A lower Ξ± defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g Γ— c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t Ξ± (I βˆ’ (1 βˆ’ Ξ±)A) βˆ’1 (2) E(G t , C, A) = E n (G t , A)C(C C) βˆ’1 (3) The output of E must be broadcast to g Γ— n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n Γ— n adjacency matrix admits a c Γ— c equitable partition A Ο€ (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n if A.", "If n c, E EP is much faster to calculate because it only inverts a small c Γ— c matrix rather than a large n Γ— n. This makes it feasible to run much larger simulations than what has been done in the past.", "A Ο€ = (C C) βˆ’1 C AC (4) EEP = Ξ±G C (I βˆ’ (1 βˆ’ Ξ±)A Ο€ ) βˆ’1 (C C) βˆ’1 (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-andforth between adults' grammars G and childrens' environment E is the two-step cycle of language change (Fig.", "1) .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe .", ".", ".", "G t β†’ E t+1 β†’ G t+1 .", ".", ".", "G t+i β†’ E t+i+1 .", ".", ".", "Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral 
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for Ξ± creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to 
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp.", "58-65) .", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007) 's study of the merger's frontier on the border Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings non-merged, but the younger siblings are.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M + ) and non-merged (M βˆ’ ) input entertain both a merged (g + ) and non-merged (g βˆ’ ) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953) .", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, The learner parses s with g 1 with probability p and with g 2 with probability q = 1 βˆ’ p. p is rewarded according to whether the choice of g successfully parses s (g β†’ s) or it fails to (g s), where Ξ³ is some small constant.", "p = p + Ξ³q, g β†’ s (1 βˆ’ Ξ³)p, g s Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on Ξ³. 
lim tβ†’βˆž p t = C 2 C 1 + C 2 lim tβ†’βˆž q t = C 1 C 1 + C 2 To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C + and non-merged grammar C βˆ’ from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g + grammar collapses would-be minimal pairs into homophones, so the penalty rate C + comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001) , g + listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered regardless of the rate of M + .", "If H is the sum token frequency of all minimal pairs and h i o , h i oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn.", "6.", "In contrast, g βˆ’ listeners are sensitive to the phonemic distinction, so they misinterpret M βˆ’ input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn.", "7).", "And given M + input, they misinterpret whenever they hear the phoneme which g βˆ’ does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1-) plus times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn.", "7).", "Since g βˆ’ misinterpretation rates are a function of the rate of M + (p) in the environment, there is a threshold of M + speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C + = 1 H i min(h i o , h i oh ) (6) C βˆ’ = 1 H i p 0 ((1 βˆ’ oh )h i o + oh h i oh ) (7) +q 0 ( oh h i o + oh h i oh ) Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004 ) corpus 3 and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼ 17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M + because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007) 's sociolinguistic study.", "It predicts that younger children may have g + while their parents and even older siblings have g βˆ’ if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquires g + iff > 17% of their input is M + and they acquire g βˆ’ otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g βˆ’ or g + in a single iteration, since the proportion of g + speakers in the population is equivalent to the proportion of M + input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a 
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010) .", "There are two grammars, g + and g βˆ’ , and learners internalize one or the other according to the 17% threshold of M + in their input.", "One cluster represents the source of the merger and is initialized at 100% g + , while the rest begin 100% g βˆ’ .", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "4 This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985) 's notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "5 Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age co-horts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4 .", "The fine/colored lines indicate the rate of M + within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially wellconnected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections 
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to undercover the interplay between acquisition and social structure on the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind which explicitly model both simultaneously is well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, it is an invaluable complement to those more traditional methodologies." ] }
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-18
Conclusions
Removes the need to simulate interactions Is widely applicable rather than made-to-order Predicts behaviour consistent with the empirical data And with principles of language change
Removes the need to simulate interactions Is widely applicable rather than made-to-order Predicts behaviour consistent with the empirical data And with principles of language change
[]
GEM-SciDuet-train-106#paper-1281#slide-19
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with practically and more elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
"The α parameter from the geometric distribution defines the travel decay rate.", "A lower α defines conceptually more mobile agents.", "More generally, E_n is a special case of E(G_t, C_t, A_t) = E_{t+1} where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn. 3, which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g × c matrix giving the environment of the average agent in each community.", "E_n(G_t, A) = G_t′ α (I − (1 − α)A)^{-1}   (2)", "E(G_t, C, A) = E_n(G_t, A) C (C′C)^{-1}   (3)", "The output of E must be broadcast to g × n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n × n adjacency matrix admits a c × c equitable partition A^π (Eqn. 4) (Schaub et al., 2016), which permits an alternate environment function E_EP(G_t, C, A), shown in Eqn. 5, that is equivalent to the lossless E_n when A admits such an equitable partition.", "If n ≫ c, E_EP is much faster to calculate because it only inverts a small c × c matrix rather than a large n × n one. This makes it feasible to run much larger simulations than what has been done in the past (a brief code sketch of these computations appears after the content list below).", "A^π = (C′C)^{-1} C′AC   (4)", "E_EP = α G_t′ C (I − (1 − α)A^π)^{-1} (C′C)^{-1}   (5)", "Learning in the Network The environment function describes what inputs E_{t+1} are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G_t.", "The second component of the framework describes the learning algorithm A(E_{t+1}) = G_{t+1}, how individuals respond to their input environment.", "The resulting G_{t+1} describes which grammars those learners will eventually contribute to the subsequent generation's environment E_{t+2}.", "This back-and-forth between adults' grammars G and children's environment E is the two-step cycle of language change (Fig. 1).", "Figure 1 (language change as an alternation between G and E matrices): . . . G_t → E_{t+1} → G_{t+1} . . . G_{t+i} → E_{t+i+1} . . .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.), so A is rarely neutral.", "A neutral and a simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described in Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016), which tested the behavior of neutral change in networks of single-grammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and the choice of categorical learners conspires to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016), initialize all members of cluster 1 with grammar g_1 and all members of cluster 2 with grammar g_2, and add additional edges between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs, as in the original model.", "In a pair of infinitely large clusters, or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g_1 and g_2 after some number of iterations, depending on the specifics of the network shape and the setting for α, creating the red curves in Fig. 2.", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016).", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3, results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in a small population might conclude that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed-input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994).", "Yang (2009)'s acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the non-merged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007)'s detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar; however, when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger, describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada, among others, where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp. 58-65).", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007)'s study of the merger's frontier on the border between Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings are non-merged, but the younger siblings are merged.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M_+) and non-merged (M_−) input entertain both a merged (g_+) and a non-merged (g_−) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward-Punishment model (Bush and Mosteller, 1953).", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for the two grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, the learner parses s with g_1 with probability p and with g_2 with probability q = 1 − p. p is rewarded or punished according to whether the choice of g successfully parses s (g → s) or fails to (g ↛ s), where γ is some small constant.", "p′ = p + γq if g → s;  p′ = (1 − γ)p if g ↛ s", "Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g ↛ s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on γ.",
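As a concrete illustration of the reward dynamics just described, here is a hedged sketch of the variational (Linear Reward-Punishment) learner. The text only spells out the update when g_1 is chosen; the symmetric update for g_2 is our assumption, and `parses` is a stand-in for whatever grammar-specific parsing check a real model would supply.

```python
import random

def variational_learner(tokens, parses, gamma=0.01, p=0.5):
    """Linear Reward-Punishment dynamics over two competing grammars.

    p is the probability of choosing g1 (q = 1 - p for g2).  On each token
    the learner picks a grammar, rewards it if it parses the token, and
    punishes it otherwise; gamma is a small learning rate."""
    for s in tokens:
        g = 1 if random.random() < p else 2
        if parses(g, s):
            # reward the chosen grammar
            p = p + gamma * (1 - p) if g == 1 else (1 - gamma) * p
        else:
            # punish the chosen grammar
            p = (1 - gamma) * p if g == 1 else p + gamma * (1 - p)
    return p
```

Over many tokens, p is expected to settle near C_2 / (C_1 + C_2), where C_1 and C_2 are the two grammars' penalty probabilities, which is exactly the limiting behavior stated next.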
"lim_{t→∞} p_t = C_2 / (C_1 + C_2),  lim_{t→∞} q_t = C_1 / (C_1 + C_2)", "To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar, C_+, and the non-merged grammar, C_−, from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g_+ grammar collapses would-be minimal pairs into homophones, so the penalty rate C_+ comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001), g_+ listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered, regardless of the rate of M_+.", "If H is the sum token frequency of all minimal pairs and h^i_o, h^i_oh are the frequencies of the ith pair's members, then C_+ is calculated by Eqn. 6.", "In contrast, g_− listeners are sensitive to the phonemic distinction, so they misinterpret M_− input at the rate ε_oh of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn. 7).", "And given M_+ input, they misinterpret whenever they hear the phoneme which g_− does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1 − ε_oh), plus ε_oh times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn. 7).", "Since g_− misinterpretation rates are a function of the rate of M_+ (p) in the environment, there is a threshold of M_+ speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C_+ = (1/H) Σ_i min(h^i_o, h^i_oh)   (6)", "C_− = (1/H) Σ_i [ p_0((1 − ε_oh)h^i_o + ε_oh h^i_oh) + q_0(ε_oh h^i_o + ε_oh h^i_oh) ]   (7)", "(A brief code sketch of this calculation appears after the content list below.)", "Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project corpus (Biemann et al., 2004) and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M_+ because it is well under the 50% threshold expected for neutral (non-advantaged) change, and it is very close to what was found in Johnson (2007)'s sociolinguistic study.", "It predicts that younger children may have g_+ while their parents and even older siblings have g_− if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquire g_+ iff more than 17% of their input is M_+, and they acquire g_− otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g_− or g_+ in a single iteration, since the proportion of g_+ speakers in the population is equivalent to the proportion of M_+ input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half of Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010).", "There are two grammars, g_+ and g_−, and learners internalize one or the other according to the 17% threshold of M_+ in their input.", "One cluster represents the source of the merger and is initialized at 100% g_+, while the rest begin at 100% g_−.", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters, representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "This network structure echoes work in sociolinguistics, in particular Milroy and Milroy (1985)'s notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for the propagation of a change.", "Propagation of the merged grammar is calculated by E_n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age cohorts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration, because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4.", "The fine/colored lines indicate the rate of M_+ within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged clusters.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Within each cluster, there is first a period in which few members have the merger, then a period of rapid diffusion of the merger, then some time where a few laggards resist the merger (Rogers Everett, 1995).", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially well-connected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration, akin to Kauhanen (2016)'s rewiring.", "The result, as shown in Figure 6, is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of M_+ across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change, which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes, from the highly abstract (e.g., Kauhanen (2016)) to those grounded in sociolinguistic and acquisition research (e.g., Yang (2009)).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only be valid for small (on the order of 10^2) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitively motivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models, because every study implements its own learning, network, and interaction models.", "The modular nature of our
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another, and vice versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to uncover the interplay between acquisition and social structure in the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind, which explicitly model both simultaneously, are well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, they are an invaluable complement to those more traditional methodologies." ] }
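A small numerical sketch of how the penalty probabilities and the roughly 17% threshold of Eqns. 6-7 can be computed. The minimal-pair frequencies, the single mishearing rate eps_oh, and the grid search below are illustrative assumptions on our part, not the paper's actual corpus counts or procedure, and the C_− formula follows the reconstruction of Eqn. 7 given above.

```python
def penalty_probabilities(pairs, p_merged, eps_oh=0.06):
    """C_+ and C_- for a given rate of merged input (Eqns. 6-7 as
    reconstructed above).  `pairs` is a list of (h_o, h_oh) token
    frequencies for each minimal pair; H is their summed token frequency."""
    H = sum(h_o + h_oh for h_o, h_oh in pairs)
    q_merged = 1.0 - p_merged
    c_plus = sum(min(h_o, h_oh) for h_o, h_oh in pairs) / H
    c_minus = sum(p_merged * ((1 - eps_oh) * h_o + eps_oh * h_oh)
                  + q_merged * (eps_oh * h_o + eps_oh * h_oh)
                  for h_o, h_oh in pairs) / H
    return c_plus, c_minus

def merger_threshold(pairs, eps_oh=0.06, step=0.001):
    """Smallest rate of M+ input at which the merged grammar g+ has the
    lower penalty (the paper reports roughly 17% on real frequencies)."""
    p = 0.0
    while p <= 1.0:
        c_plus, c_minus = penalty_probabilities(pairs, p, eps_oh)
        if c_minus > c_plus:
            return p
        p += step
    return None

# toy frequencies, purely for illustration
print(merger_threshold([(120, 30), (80, 75), (400, 15)]))
```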
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
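To show how the environment function and a threshold learner compose into the two-step cycle of change, here is a hedged end-to-end sketch. The closed form follows Eqn. 2 and the community-level variant follows Eqns. 4-5; the clustered network, parameter values, and 17% threshold are simplified stand-ins for the paper's 100-cluster, 75-member setup rather than a reimplementation of it, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def environment_closed_form(G, A, alpha):
    """E_n(G_t, A) = G_t' alpha (I - (1 - alpha)A)^{-1}  (Eqn. 2)."""
    n = A.shape[0]
    return G.T @ (alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A))

def environment_equitable(G, C, A, alpha):
    """E_EP (Eqn. 5) via the equitable partition A_pi = (C'C)^{-1} C'AC;
    shown for reference, it only inverts a c x c matrix."""
    CtC_inv = np.linalg.inv(C.T @ C)
    A_pi = CtC_inv @ C.T @ A @ C
    c = A_pi.shape[0]
    return alpha * (G.T @ C) @ np.linalg.inv(np.eye(c) - (1 - alpha) * A_pi) @ CtC_inv

def toy_clustered_network(n_clusters=10, size=20, p_in=0.3, p_out=0.01):
    """Random clusters with dense internal and sparse external links,
    column-normalized so the matrix is column stochastic."""
    n = n_clusters * size
    cluster = np.repeat(np.arange(n_clusters), size)
    same = cluster[:, None] == cluster[None, :]
    A = np.where(same, rng.random((n, n)) < p_in,
                 rng.random((n, n)) < p_out).astype(float)
    np.fill_diagonal(A, 0.0)
    A += 1e-9                              # avoid all-zero columns
    return A / A.sum(axis=0, keepdims=True), cluster

def threshold_learners(E, threshold=0.17):
    """Learners acquire g+ iff its share of their input exceeds the threshold."""
    n = E.shape[1]
    G_new = np.zeros((n, 2))
    G_new[:, 1] = (E[1] > threshold).astype(float)   # row 1 of E = share of g+
    G_new[:, 0] = 1.0 - G_new[:, 1]
    return G_new

A, cluster = toy_clustered_network()
G = np.zeros((A.shape[0], 2))
G[:, 0] = 1.0                              # everyone starts non-merged (g-)
G[cluster == 0] = [0.0, 1.0]               # one cluster seeds the merger (g+)

for t in range(30):
    E = environment_closed_form(G, A, alpha=0.1)
    G = threshold_learners(E)
    print(t, round(G[:, 1].mean(), 3))     # population-wide rate of g+
```

Whether and how fast the merger spreads here depends on the toy network's connectivity; the point of the sketch is only to show how the population step (E) and the learning step (G) alternate.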
GEM-SciDuet-train-106#paper-1281#slide-19
End
NDSEG Fellowship (US ARO)
NDSEG Fellowship (US ARO)
[]
GEM-SciDuet-train-106#paper-1281#slide-20
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with practically and more elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The Ξ± parameter from the geometric distribution 1 defines the travel decay rate.", "A lower Ξ± defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g Γ— c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t Ξ± (I βˆ’ (1 βˆ’ Ξ±)A) βˆ’1 (2) E(G t , C, A) = E n (G t , A)C(C C) βˆ’1 (3) The output of E must be broadcast to g Γ— n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n Γ— n adjacency matrix admits a c Γ— c equitable partition A Ο€ (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n if A.", "If n c, E EP is much faster to calculate because it only inverts a small c Γ— c matrix rather than a large n Γ— n. This makes it feasible to run much larger simulations than what has been done in the past.", "A Ο€ = (C C) βˆ’1 C AC (4) EEP = Ξ±G C (I βˆ’ (1 βˆ’ Ξ±)A Ο€ ) βˆ’1 (C C) βˆ’1 (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-andforth between adults' grammars G and childrens' environment E is the two-step cycle of language change (Fig.", "1) .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe .", ".", ".", "G t β†’ E t+1 β†’ G t+1 .", ".", ".", "G t+i β†’ E t+i+1 .", ".", ".", "Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral 
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for Ξ± creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to 
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp.", "58-65) .", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007) 's study of the merger's frontier on the border Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings non-merged, but the younger siblings are.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M + ) and non-merged (M βˆ’ ) input entertain both a merged (g + ) and non-merged (g βˆ’ ) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953) .", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, The learner parses s with g 1 with probability p and with g 2 with probability q = 1 βˆ’ p. p is rewarded according to whether the choice of g successfully parses s (g β†’ s) or it fails to (g s), where Ξ³ is some small constant.", "p = p + Ξ³q, g β†’ s (1 βˆ’ Ξ³)p, g s Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on Ξ³. 
lim tβ†’βˆž p t = C 2 C 1 + C 2 lim tβ†’βˆž q t = C 1 C 1 + C 2 To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C + and non-merged grammar C βˆ’ from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g + grammar collapses would-be minimal pairs into homophones, so the penalty rate C + comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001) , g + listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered regardless of the rate of M + .", "If H is the sum token frequency of all minimal pairs and h i o , h i oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn.", "6.", "In contrast, g βˆ’ listeners are sensitive to the phonemic distinction, so they misinterpret M βˆ’ input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn.", "7).", "And given M + input, they misinterpret whenever they hear the phoneme which g βˆ’ does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1-) plus times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn.", "7).", "Since g βˆ’ misinterpretation rates are a function of the rate of M + (p) in the environment, there is a threshold of M + speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C + = 1 H i min(h i o , h i oh ) (6) C βˆ’ = 1 H i p 0 ((1 βˆ’ oh )h i o + oh h i oh ) (7) +q 0 ( oh h i o + oh h i oh ) Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004 ) corpus 3 and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼ 17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M + because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007) 's sociolinguistic study.", "It predicts that younger children may have g + while their parents and even older siblings have g βˆ’ if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquires g + iff > 17% of their input is M + and they acquire g βˆ’ otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g βˆ’ or g + in a single iteration, since the proportion of g + speakers in the population is equivalent to the proportion of M + input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a 
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010) .", "There are two grammars, g + and g βˆ’ , and learners internalize one or the other according to the 17% threshold of M + in their input.", "One cluster represents the source of the merger and is initialized at 100% g + , while the rest begin 100% g βˆ’ .", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "4 This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985) 's notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "5 Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age co-horts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4 .", "The fine/colored lines indicate the rate of M + within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially wellconnected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections 
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to uncover the interplay between acquisition and social structure in the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind, which explicitly model both simultaneously, are well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, it is an invaluable complement to those more traditional methodologies." ] }
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-20
Variational Learner Yang 2000
Learners consider multiple grammars g1, g2 simultaneously. Each g is penalised when it cannot parse an input: p_{t+1} = p_t + γ(1 - p_t) if g1 parses the input, p_{t+1} = (1 - γ)p_t if g1 fails. The g with lower penalty probability has the advantage. If mature speakers adopt one grammar categorically, the one with smaller C wins: lim_{t→∞} p_t > 1/2 if C1 < C2
Learners consider multiple grammars g1, g2 simultaneously. Each g is penalised when it cannot parse an input: p_{t+1} = p_t + γ(1 - p_t) if g1 parses the input, p_{t+1} = (1 - γ)p_t if g1 fails. The g with lower penalty probability has the advantage. If mature speakers adopt one grammar categorically, the one with smaller C wins: lim_{t→∞} p_t > 1/2 if C1 < C2
[]
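A minimal sketch of the variational (Linear Reward-Punishment) learner summarized on the slide above; gamma and the two penalty probabilities C1 and C2 are placeholder toy values here, whereas in the paper they are estimated from corpus frequencies and mishearing rates.

import random

def variational_learner(C1, C2, gamma=0.01, n_inputs=50000, seed=0):
    # p = probability of parsing with g1; q = 1 - p for g2.
    # The chosen grammar is rewarded when it parses the input and punished when it fails.
    rng = random.Random(seed)
    p = 0.5
    for _ in range(n_inputs):
        use_g1 = rng.random() < p
        fails = rng.random() < (C1 if use_g1 else C2)   # C_i = prob. that g_i fails on an input
        if use_g1:
            p = (1 - gamma) * p if fails else p + gamma * (1 - p)
        else:
            q = 1 - p
            q = (1 - gamma) * q if fails else q + gamma * p
            p = 1 - q
    return p

print(variational_learner(C1=0.01, C2=0.05))

With C1 < C2, p_t drifts toward C2 / (C1 + C2) > 1/2, so a learner who ultimately adopts one grammar categorically settles on g1, the grammar with the smaller penalty probability.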
GEM-SciDuet-train-106#paper-1281#slide-21
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with practically and more elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The Ξ± parameter from the geometric distribution 1 defines the travel decay rate.", "A lower Ξ± defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g Γ— c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t Ξ± (I βˆ’ (1 βˆ’ Ξ±)A) βˆ’1 (2) E(G t , C, A) = E n (G t , A)C(C C) βˆ’1 (3) The output of E must be broadcast to g Γ— n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n Γ— n adjacency matrix admits a c Γ— c equitable partition A Ο€ (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n if A.", "If n c, E EP is much faster to calculate because it only inverts a small c Γ— c matrix rather than a large n Γ— n. This makes it feasible to run much larger simulations than what has been done in the past.", "A Ο€ = (C C) βˆ’1 C AC (4) EEP = Ξ±G C (I βˆ’ (1 βˆ’ Ξ±)A Ο€ ) βˆ’1 (C C) βˆ’1 (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-andforth between adults' grammars G and childrens' environment E is the two-step cycle of language change (Fig.", "1) .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe .", ".", ".", "G t β†’ E t+1 β†’ G t+1 .", ".", ".", "G t+i β†’ E t+i+1 .", ".", ".", "Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral 
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for Ξ± creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to 
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp.", "58-65) .", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007) 's study of the merger's frontier on the border Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings non-merged, but the younger siblings are.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M + ) and non-merged (M βˆ’ ) input entertain both a merged (g + ) and non-merged (g βˆ’ ) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953) .", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, The learner parses s with g 1 with probability p and with g 2 with probability q = 1 βˆ’ p. p is rewarded according to whether the choice of g successfully parses s (g β†’ s) or it fails to (g s), where Ξ³ is some small constant.", "p = p + Ξ³q, g β†’ s (1 βˆ’ Ξ³)p, g s Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on Ξ³. 
lim tβ†’βˆž p t = C 2 C 1 + C 2 lim tβ†’βˆž q t = C 1 C 1 + C 2 To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C + and non-merged grammar C βˆ’ from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g + grammar collapses would-be minimal pairs into homophones, so the penalty rate C + comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001) , g + listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered regardless of the rate of M + .", "If H is the sum token frequency of all minimal pairs and h i o , h i oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn.", "6.", "In contrast, g βˆ’ listeners are sensitive to the phonemic distinction, so they misinterpret M βˆ’ input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn.", "7).", "And given M + input, they misinterpret whenever they hear the phoneme which g βˆ’ does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1-) plus times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn.", "7).", "Since g βˆ’ misinterpretation rates are a function of the rate of M + (p) in the environment, there is a threshold of M + speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C + = 1 H i min(h i o , h i oh ) (6) C βˆ’ = 1 H i p 0 ((1 βˆ’ oh )h i o + oh h i oh ) (7) +q 0 ( oh h i o + oh h i oh ) Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004 ) corpus 3 and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼ 17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M + because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007) 's sociolinguistic study.", "It predicts that younger children may have g + while their parents and even older siblings have g βˆ’ if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquires g + iff > 17% of their input is M + and they acquire g βˆ’ otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g βˆ’ or g + in a single iteration, since the proportion of g + speakers in the population is equivalent to the proportion of M + input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a 
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010) .", "There are two grammars, g + and g βˆ’ , and learners internalize one or the other according to the 17% threshold of M + in their input.", "One cluster represents the source of the merger and is initialized at 100% g + , while the rest begin 100% g βˆ’ .", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "4 This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985) 's notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "5 Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age co-horts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4 .", "The fine/colored lines indicate the rate of M + within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially wellconnected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections 
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to uncover the interplay between acquisition and social structure in the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind, which explicitly model both simultaneously, are well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, it is an invaluable complement to those more traditional methodologies." ] }
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-21
Results Updating Connections
Social connections change constantly. Rewire the edges (recalculate A) at every iteration. The outcome is similar, but clusters' tipping points are temporally closer. No cluster remains particularly well or poorly connected for long. [Figure legend: Cluster Merger Rates, Rhode Island, Avg]
Social connections change constantly. Rewire the edges (recalculate A) at every iteration. The outcome is similar, but clusters' tipping points are temporally closer. No cluster remains particularly well or poorly connected for long. [Figure legend: Cluster Merger Rates, Rhode Island, Avg]
[]
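For the rewiring variant on the slide above, a minimal sketch (assuming the same star-cluster construction as in the earlier simulation sketch, and that every node already has at least one within-cluster tie) would redraw the weak inter-cluster ties and recompute the environment matrix once per iteration instead of precomputing it; strong_A, ties_per_cluster, and alpha are illustrative parameters, not values fixed by the paper.

import numpy as np

def rewired_environment(strong_A, n_clusters, size, rng, alpha=0.5, ties_per_cluster=5):
    # strong_A: fixed, unnormalized within-cluster ties (n x n); weak ties are redrawn here.
    A = strong_A.copy()
    for c in range(n_clusters):
        for d in rng.choice(n_clusters, ties_per_cluster, replace=False):
            i = rng.integers(c * size, (c + 1) * size)
            j = rng.integers(d * size, (d + 1) * size)
            A[i, j] = A[j, i] = 1.0
    A = A / A.sum(axis=0)                   # recalculate the column-stochastic A
    n = A.shape[0]
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A)

In the simulation loop, this function would be called at every iteration in place of the fixed M, so no cluster stays especially well or poorly connected for long, which is why the clusters' tipping points bunch together in time.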
GEM-SciDuet-train-106#paper-1281#slide-22
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with practically and more elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The Ξ± parameter from the geometric distribution 1 defines the travel decay rate.", "A lower Ξ± defines conceptually more mobile agents.", "More generally, E n is a special case of E(G t , C t , A t ) = E t+1 where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g Γ— c matrix giving the environment of the average agent in each community.", "2 E n (G t , A) = G t Ξ± (I βˆ’ (1 βˆ’ Ξ±)A) βˆ’1 (2) E(G t , C, A) = E n (G t , A)C(C C) βˆ’1 (3) The output of E must be broadcast to g Γ— n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n Γ— n adjacency matrix admits a c Γ— c equitable partition A Ο€ (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E EP (G t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E n if A.", "If n c, E EP is much faster to calculate because it only inverts a small c Γ— c matrix rather than a large n Γ— n. This makes it feasible to run much larger simulations than what has been done in the past.", "A Ο€ = (C C) βˆ’1 C AC (4) EEP = Ξ±G C (I βˆ’ (1 βˆ’ Ξ±)A Ο€ ) βˆ’1 (C C) βˆ’1 (5) Learning in the Network The environment function describes what inputs E t+1 are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G t .", "The second component of the framework describes the learning algorithm A(E t+1 ) = G t+1 , how individuals respond to their input environment.", "The resulting G t+1 describes which grammars those learners will eventually contribute to the subsequent generation's environment E t+2 .", "This back-andforth between adults' grammars G and childrens' environment E is the two-step cycle of language change (Fig.", "1) .", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe .", ".", ".", "G t β†’ E t+1 β†’ G t+1 .", ".", ".", "G t+i β†’ E t+i+1 .", ".", ".", "Figure 1: Language change as an alternation between G and E matrices that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of singlegrammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral 
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and the choice of categorical learners conspires to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016), initialize all members of cluster 1 with grammar g_1 and all members of cluster 2 with grammar g_2, and add additional edges between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs, as in the original model.", "In a pair of infinitely large clusters, or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g_1 and g_2 after some number of iterations depending on the specifics of the network shape and the setting for α, creating the red curves in Fig. 2.", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016).", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3, results are chaotic for n = 200 once again and near the predicted curve for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in a small population might conclude that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994).", "Yang (2009)'s acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the non-merged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007)'s detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input needed for a child to acquire the merged grammar; however, when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger, describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada, among others, where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp. 58-65).", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007)'s study of the merger's frontier on the border between Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings are non-merged, but the younger siblings are.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M+) and non-merged (M−) input entertain both a merged (g+) and a non-merged (g−) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward-Punishment model (Bush and Mosteller, 1953).", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, the learner parses s with g_1 with probability p and with g_2 with probability q = 1 − p. p is rewarded or punished according to whether the choice of g successfully parses s (g → s) or fails to (g ↛ s), where γ is some small constant.", "p' = p + γq if g → s, and p' = (1 − γ)p if g ↛ s. Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g ↛ s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on γ.
lim_{t→∞} p_t = C_2 / (C_1 + C_2),   lim_{t→∞} q_t = C_1 / (C_1 + C_2).   To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C+ and non-merged grammar C− from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g+ grammar collapses would-be minimal pairs into homophones, so the penalty rate C+ comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001), g+ listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered, regardless of the rate of M+.", "If H is the summed token frequency of all minimal pairs and h^i_o, h^i_oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn. 6.", "In contrast, g− listeners are sensitive to the phonemic distinction, so they misinterpret M− input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn. 7).", "And given M+ input, they misinterpret whenever they hear the phoneme which g− does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1 − ε_oh), plus ε_oh times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn. 7).", "Since g− misinterpretation rates are a function of the rate of M+ (p) in the environment, there is a threshold of M+ speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C+ = (1/H) Σ_i min(h^i_o, h^i_oh)   (6)   C− = (1/H) Σ_i [ p_0((1 − ε_oh)h^i_o + ε_oh h^i_oh) + q_0(ε_oh h^i_o + ε_oh h^i_oh) ]   (7)   Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004) corpus and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ∼17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M+ because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007)'s sociolinguistic study.", "It predicts that younger children may have g+ while their parents and even older siblings have g− if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquire g+ iff more than 17% of their input is M+, and they acquire g− otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g− or g+ in a single iteration, since the proportion of g+ speakers in the population is equivalent to the proportion of M+ input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010) .", "There are two grammars, g + and g βˆ’ , and learners internalize one or the other according to the 17% threshold of M + in their input.", "One cluster represents the source of the merger and is initialized at 100% g + , while the rest begin 100% g βˆ’ .", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "4 This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985) 's notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "5 Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age co-horts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4 .", "The fine/colored lines indicate the rate of M + within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially wellconnected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections 
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration, akin to Kauhanen (2016)'s rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of M+ across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change, which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016)) to those grounded in sociolinguistic and acquisition research (e.g., Yang (2009)).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only be valid for small (on the order of 10^2) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitively motivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another, and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to uncover the interplay between acquisition and social structure in the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind, which explicitly model both simultaneously, are well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, they are an invaluable complement to those more traditional methodologies." ] }
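To make the environment functions in Eqns. 2-5 above concrete, the following is a minimal NumPy sketch of E_n, E, and E_EP. It is an illustrative reading of the extracted formulas rather than the authors' released code; the shapes (G as an n×g row-stochastic grammar matrix, A as an n×n column-stochastic adjacency matrix, C as an n×c community indicator matrix) follow the definitions in the text, and all variable names are ours.

```python
import numpy as np

def environment_full(G, A, alpha):
    """Eqn. 2: E_n(G_t, A) = G_t^T * alpha * (I - (1 - alpha) A)^-1.
    Returns a g x n matrix whose column j is the input distribution seen by individual j."""
    n = A.shape[0]
    walk = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A)  # geometric "travel" kernel
    return G.T @ walk

def environment_by_community(G, C, A, alpha):
    """Eqn. 3: average the individual environments within each community (g x c output)."""
    return environment_full(G, A, alpha) @ C @ np.linalg.inv(C.T @ C)

def environment_equitable(G, C, A, alpha):
    """Eqn. 5: the cheaper variant that inverts only the c x c quotient matrix A_pi (Eqn. 4);
    it matches the lossless version when communities are internally uniform and n >> c."""
    CtC_inv = np.linalg.inv(C.T @ C)
    A_pi = CtC_inv @ C.T @ A @ C          # Eqn. 4: equitable-partition quotient of A
    c = C.shape[1]
    return alpha * (G.T @ C) @ np.linalg.inv(np.eye(c) - (1 - alpha) * A_pi) @ CtC_inv
```

Because the travel kernel is a single matrix inverse, a whole generation's environments come out of one linear solve, which is what lets the framework drop the stochastic inner loop over agent interactions.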
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
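The variational (Linear Reward-Punishment) learner and its limiting behavior, described in the Background section of the paper text above, can be sketched as follows. This is a hedged illustration of the update p' = p + γq on success and p' = (1 − γ)p on failure; the penalty rates and learning rate below are placeholder values chosen only to show that the learner converges toward C_2/(C_1 + C_2).

```python
import random

def variational_learner(c1, c2, gamma=0.01, steps=50000, p=0.5, seed=1):
    """Simulate one learner weighing g1 against g2.
    c1, c2: penalty probabilities (chance that g1 / g2 fails on a random input token)."""
    rng = random.Random(seed)
    for _ in range(steps):
        use_g1 = rng.random() < p          # pick a grammar to parse the next token
        fails = rng.random() < (c1 if use_g1 else c2)
        if use_g1:
            p = (1 - gamma) * p if fails else p + gamma * (1 - p)
        else:
            q = 1 - p
            q = (1 - gamma) * q if fails else q + gamma * p
            p = 1 - q
    return p

c1, c2 = 0.02, 0.08                        # placeholder penalty probabilities
print(variational_learner(c1, c2))         # hovers near c2 / (c1 + c2) = 0.8
```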
GEM-SciDuet-train-106#paper-1281#slide-22
Fractional Updating
The merger spreads rapidly enough to distinguish older and younger siblings Only a fraction of the population is of the correct age at any moment Update only 10% of random nodes at every iteration Cluster Merger Rates Rhode Island Avg Similar outcome with wider spread between cluster tipping points Simulation took about 5x as long because
The merger spreads rapidly enough to distinguish older and younger siblings Only a fraction of the population is of the correct age at any moment Update only 10% of random nodes at every iteration Cluster Merger Rates Rhode Island Avg Similar outcome with wider spread between cluster tipping points Simulation took about 5x as long because
[]
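The penalty probabilities of Eqns. 6-7 and the resulting acquisition threshold can be illustrated with a toy calculation. The minimal-pair frequencies and mishearing rate below are invented placeholders (the paper's actual figures come from the Wortschatz corpus and Peterson and Barney (1952)), so this sketch shows only the shape of the computation, not the reported ∼17% threshold.

```python
# Toy (o-member, oh-member) minimal-pair token frequencies; real values come from a corpus.
pairs = [("cot", 120, "caught", 260), ("Don", 80, "Dawn", 40), ("stock", 150, "stalk", 30)]
eps = 0.05                                  # placeholder vowel mishearing rate
H = sum(f_o + f_oh for _, f_o, _, f_oh in pairs)

def penalty_merged():
    # Eqn. 6: merged listeners only misinterpret the less frequent member of each pair.
    return sum(min(f_o, f_oh) for _, f_o, _, f_oh in pairs) / H

def penalty_nonmerged(p_plus):
    # Eqn. 7 (schematic): non-merged listeners misinterpret merged input when the unexpected
    # phoneme comes through, and misinterpret any input at the mishearing rate eps.
    q = 1 - p_plus
    return sum(p_plus * ((1 - eps) * f_o + eps * f_oh) + q * (eps * f_o + eps * f_oh)
               for _, f_o, _, f_oh in pairs) / H

# The merged grammar gains the advantage once its penalty drops below the non-merged one.
threshold = next((p / 100 for p in range(101)
                  if penalty_nonmerged(p / 100) > penalty_merged()), None)
print(penalty_merged(), threshold)
```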
GEM-SciDuet-train-106#paper-1281#slide-23
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with practically and more elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
GEM-SciDuet-train-106#paper-1281#slide-23
Results Network Size
Tested our network size assumptions Repeat the experiment with 40 clusters of 18 individuals each The S-shape is less S-shaped Individual clusters shows step pattern Cluster Merger Rates Rhode Island Avg
Tested our network size assumptions Repeat the experiment with 40 clusters of 18 individuals each The S-shape is less S-shaped Individual clusters shows step pattern Cluster Merger Rates Rhode Island Avg
[]
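Finally, the cluster experiment summarized in the slide above (threshold learners acquiring g+ when more than 17% of their input is merged, seeded from one fully merged cluster) can be outlined as a bare two-step G → E → G loop. This is a simplified sketch: the cluster count, tie wiring, and decay rate are scaled-down placeholder choices rather than the paper's exact configuration, and the real experiments use the environment machinery shown earlier instead of this stripped-down loop.

```python
import numpy as np

rng = np.random.default_rng(0)
clusters, size, alpha, thresh = 20, 25, 0.5, 0.17      # scaled-down placeholder parameters
n = clusters * size

# Block-structured adjacency: dense ties within clusters, a few random weak ties between them.
A = np.zeros((n, n))
for k in range(clusters):
    A[k * size:(k + 1) * size, k * size:(k + 1) * size] = rng.random((size, size))
for _ in range(3 * clusters):
    i, j = rng.integers(n, size=2)
    A[i, j] = A[j, i] = 1.0
A /= A.sum(axis=0, keepdims=True)                       # make columns stochastic

g_plus = np.zeros(n)                                    # probability each speaker expresses M+
g_plus[:size] = 1.0                                     # one seed cluster starts fully merged

walk = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * A)   # fixed travel kernel (Eqn. 2)
for t in range(30):
    env = walk.T @ g_plus                               # share of M+ in each learner's input
    g_plus = (env > thresh).astype(float)               # threshold learners acquire g+ or g-
    print(t, round(g_plus.mean(), 3))                   # population-wide merger rate
```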
GEM-SciDuet-train-106#paper-1281#slide-24
1281
A Framework for Representing Language Acquisition in a Population Setting
Language variation and change are driven both by individuals' internal cognitive processes and by the social structures through which language propagates. A wide range of computational frameworks have been proposed to connect these drivers. We compare the strengths and weaknesses of existing approaches and propose a new analytic framework which combines previous network models' ability to capture realistic social structure with practically and more elegant computational properties. The framework privileges the process of language acquisition and embeds learners in a social network but is modular so that population structure can be combined with different acquisition models. We demonstrate two applications for the framework: a test of practical concerns that arise when modeling acquisition in a population setting and an application of the framework to recent work on phonological mergers in progress.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217 ], "paper_content_text": [ "Introduction The process of language change should be thought of as a two-step cycle in which 1) individuals acquire their native languages from their predecessors then 2) pass them on to their successors.", "Small changes accrue over time this way and create both small-scale interpersonal variation and large-scale typological differences.", "It is easy to draw a strong analogy here between linguistic evolution and biological evolution.", "Both feature classic descent with modification, except while phenotypes are transmitted through genes and acted on by natural selection, language is both transmitted through and constrained by the individual (Cavalli-Sforza and Feldman, 1981; Ritt, 2004, etc.)", ".", "But while evolution, linguistic or otherwise, is driven by forces acting on the individual, it unfolds on the level of populations (Cavalli-Sforza and Feldman, 1981) .", "The influence of communitylevel social factors on the path of language change is a major focus of sociolinguistics (Labov, 2001; Milroy and Milroy, 1985; Rogers Everett, 1995) .", "Ideally, one could observe population-level variation unfold in real time while testing out individual factors, but this is impossible because nobody can travel back in time or fit entire natural environments into a lab.", "Change that has already happened is out of reach, and change in progress is buried in a world of confounds.", "The classic sociolinguistic method instead approaches the problem by inferring causal factors from patterns discovered in field interviews and corpora (Labov, 1994; Labov et al., 2005, etc.)", ".", "This is the primary source of empirical data in the field and the only way to look at language change in a naturalistic setting, but it is limited in that it cannot test cause and effect directly.", "More recently, controlled experimental studies have emerged as a complementary line of research which manipulate causal factors directly (Johnson et al., 1999; Campbell-Kibler, 2009, etc.)", ", but are inherently removed natural time and scale.", "A third approach, the one we build upon here, relies on computational modeling to simulate how sociolinguistic factors might work together in larger populations (Klein, 1966; Blythe and Croft, 2012; Kauhanen, 2016, etc.)", ".", "It has long been argued that language acquisition is the primary cause of language change (Sweet, 1899; Lightfoot, 1979; Niyogi, 1998, etc.)", ".", "In the last few decades, this connection has been modeled computationally (Gibson and Wexler, 
1994; Kirby et al., 2000; Yang, 2000, etc.)", ", leading to the strong conclusion that change is the inevitable consequence of mixed linguistic input or finite learning periods (Niyogi and Berwick, 1996) , even if children are \"perfect\" learners.", "An important result connecting the learner and population emphasizes the need for this line of work: the space of paths of change available in populations is formally larger than the paths available to linear chains of iterated learners.", "Niyogi and Berwick (2009) prove formally that even perfectly-mixed (i.e., uniform and homogeneous social network) populations admit phase transitions in the path of change unavailable to chains of single learners commonly implemented in iterated learning (Kirby et al., 2000) .", "This suggests that small-population experimental studies in sociolinguistics and in child language acquisition do not paint the full picture of language change.", "We introduce a new framework for modeling language change in populations.", "It has an outer loop to represent generational progression, but it replaces the inner loop which calculates randomized interactions between agents with a single formula that is defined generally enough to allow the simulation of a wide range of scenarios.", "It builds upon the principled formalism described by Niyogi and Berwick (1996, et seq.)", ", privileging the acquisition model and separating it from the population model.", "The resulting modular framework is described in the following sections.", "First, Section 1.1 presents a survey of previous simulation work followed by a description of the new population model in Section 2.", "Next, Section 3 addresses practical concerns relating population size to assumptions about language acquisition.", "Finally, Section 4 introduces a case study on phonological change which demonstrates the need for appropriate models both of acquisition and populations.", "Related Work Computational models for the propagation of linguistic variation have been employed with a variety of research goals in mind.", "Every paper implements its own framework with few exceptions, so comparison across studies is difficult.", "Additionally, since each model is essentially 'boutique,' it is always possible that models are designed consciously or unconsciously to achieve a specific outcome rather driven by underlying principles.", "We group these frameworks into three classes according to their implementation, swarm, network, and algebraic, and discusses their strengths and weaknesses.", "The first class, called swarm here, models populations as collections of agents placed on a grid.", "They \"swarm\" around randomly according to some movement function, and \"interact\" when they occupy adjacent grid spaces (Satterfield, 2001; Harrison et al., 2002; Ke et al., 2008; Stanford and Kenny, 2013) .", "This tends toward concrete interpretation, for example, more mobile populations are expressed directly by more mobile agents.", "They capture Bloomfield (1933) 's \"principle of density\" which describes the observation that geographically or socially close individuals interact more frequently than those farther away.", "On the other hand, they provide little control over network structure, relying on series of explicit movement constraints in order to direct their agents, and since each one moves randomly at each iteration, these models have potentially thousands of degrees of freedom.", "Such simulations should be run many times if any sort of statistically expected results are 
to be computed.", "The second class, network frameworks, model speakers as nodes and interaction probabilities as weighted edges on network graphs (Minett and Wang, 2008; Baxter et al., 2009; Fagyal et al., 2010; Blythe and Croft, 2012; Kauhanen, 2016) .", "These frameworks offer precise control over social network structure and can test specific community models from within sociolinguistics.", "However, implementations usually proceed by some kind of iterative probabilistic node-pair selection process, and in this way suffer from the same statistical pitfalls as swarm frameworks.", "In contrast to swarm models, interaction is rigidly restricted to immediately connected nodes, so to achieve gradient interaction probabilities, edges must be frequently updated or nearly fully-connected graphs with carefully assigned edge weights would need to be constructed and motivated.", "The third class, algebraic frameworks, present analytic methods for determining the state of the network at the end of each iteration rather than relying on stochastic simulation of individual agents (Niyogi and Berwick, 1996, 1997; Yang, 2000; Baxter et al., 2006; Minett and Wang, 2008; Niyogi and Berwick, 2009) .", "Removing that inner loop is a more mathematically elegant approach and avoids dealing unnecessarily with statistics behind random trials.", "Removing that loop speeds up calculation as well, making larger simulations more tractable than with network or swarm frameworks.", "But this power is achieved by sacrificing the social network.", "Up to this point, such models have, to our knowledge, only been defined over perfectly-mixed (i.e., no network effects) populations.", "That assumption is useful for reasoning about the mathematical theory behind language change, but it hinders such models' utility in empirical studies.", "For example, though Baxter et al.", "(2006) and Minett and Wang (2008) implement algebraic models for perfectly mixed populations, they fall back on network models to model network effects.", "Framework for Transmission in Social Networks Algebraic frameworks have their mathematical advantage, but network frameworks provide a richer model for representing real-world population structures and swarm models capture density effects by default.", "An ideal framework would combine the benefits of all three of these.", "Here we do just that.", "We introduce a framework that instantiates Niyogi and Berwick (1996) 's acquisitiondriven formalism where change is handled explicitly as a two-step alternation between individual learners learning and populations interacting.", "It provides an analytic solution to the state of a network structure over which swarm-like behavior can be modeled.", "We begin by conceptualizing the framework in terms of agents traveling probabilistically over a network structure as in Algo.", "1 before introducing the analytic solution.", "There is an individual standing at every node in the graph, and at every iteration, each individual begins at some location and travels along the network's edges, at each step deciding to continue on or to stop and interact with the agent at that node.", "Any two agents with a nonzero weight path between them could potentially interact, so the overall probability of an interaction is a function of the shape of the network and the decay rate of the step probability.", "The shorter and higher weighted the path between two agents, the more likely they are to interact.", "This corresponds to the gradient interaction probabilities of swarm 
frameworks.", "Social networks are typically conceived of as graph structures with individuals as vertices and the social or geographical connections between individuals as edges, and this allows for a great deal of flexibility.", "If edges are undirected, then all interactions are equal and bidirectional, but if edges are directed, interactions may or may not be.", "Edges can be weighted to represent likelihood of interaction or some measure of social valuation, and this too can vary over time.", "Lastly, it is possible to add and remove nodes themselves to capture births, deaths, or migration.", "The network structure is represented computationally here as an adjacency matrix A.", "In a population of n individuals, this is n Γ— n where each element a ij is the weight of the connection from individual j to individual i.", "The matrix must be column stochastic (all columns sum to 1 and contain only positive elements) so that edge weights can be interpreted as probabilities.", "The special case where the matrix is symmetric (every a ij = a ji ) models undirected edges, and more strongly, the model reduces to perfectly-mixed populations when each a ij = 1 n .", "We define a notion of communities over the nodes of the network in order to add the option to categorize groups of individuals.", "Membership among c communities is identified with an n Γ— c indicator matrix C. Depending on the problem at hand, it is possible to calculate the average behavior of the learners within each community directly without having to calculate the behavior of each individual member.", "Propagation in the Network In a typical network model, the edge weights between nodes in A are interpreted directly as interaction probabilities, meaning that individuals only ever interact with their immediate graph neighbors.", "We take a different approach by allowing the agents to \"travel\" and potentially interact with any other agent whose node is connected by a path of non-zero edges.", "If the number of traveling steps were fixed at k, the probability of each pair interacting would be defined as A k .", "It is more complicated for us since the number of steps traveled is a random variable.", "The probability of j interacting with i (p(ij)) is the probability of them interacting after k steps times the probability of k for all values of k as in Eqn.", "1.", "Combining this intuition with A yields the interaction probabilities for all i, j pairs.", "p(ij) = k p(ij|k steps) p(k steps) (1) The pattern of linguistic variants or grammars (in the formal sense where grammar g is the intensional equivalent of language L g ) within a network unfolds as a dynamical system over the course of many iterations, and learners' positions within the network mediate which ones they eventually acquire.", "In a system with g grammars and n individuals, a n Γ— g row-stochastic matrix G specifies the probability with which each community expresses each grammar.", "Given this notion of interaction and the specification of grammars expressed within a network, it is possible to compute the distribution of grammars presented to each learner.", "This is the learners' linguistic environment and is represented by a matrix E in the same form as G .", "An environment function E n (G t , A) = E t+1 shown in Eqn.", "2 calculates E by first calculating all the interaction probabilities in the network then multiplying those by the grammars which every agent expresses to get the environment E. 
The α parameter from the geometric distribution defines the travel decay rate.", "A lower α defines conceptually more mobile agents.", "More generally, E_n is a special case of E(G_t , C_t , A_t) = E_{t+1} where the number of communities equals the number of individuals (c = n).", "C becomes the identity matrix without loss of generality, so the network's initial condition does not have to be defined explicitly.", "For any other community definition, an initial condition has to be defined as in Eqn.", "3 which specifies the starting point in the network that each agent conceptually begins traveling from.", "The output of E is a g × c matrix giving the environment of the average agent in each community.", "E_n(G_t , A) = G_t^T α (I − (1 − α)A)^{-1} (2) and E(G_t , C, A) = E_n(G_t , A) C (C^T C)^{-1} (3)", "The output of E must be broadcast to g × n, which would result in the loss of some information unless the assumption can be made that each community is internally uniform.", "However, when that assumption can be made, the n × n adjacency matrix admits a c × c equitable partition A^π (Eqn.", "4) (Schaub et al., 2016) which permits an alternate environment function E_EP(G_t , C, A) shown in Eqn.", "5 that is equivalent to the lossless E_n if A.", "If n ≫ c, E_EP is much faster to calculate because it only inverts a small c × c matrix rather than a large n × n one. This makes it feasible to run much larger simulations than what has been done in the past.", "A^π = (C^T C)^{-1} C^T A C (4) and E_EP = α G^T C (I − (1 − α)A^π)^{-1} (C^T C)^{-1} (5)", "Learning in the Network The environment function describes what inputs E_{t+1} are available to learners given the language expressed by the mature speakers of the previous age cohort with grammars G_t .", "The second component of the framework describes the learning algorithm A(E_{t+1}) = G_{t+1} , how individuals respond to their input environment.", "The resulting G_{t+1} describes which grammars those learners will eventually contribute to the subsequent generation's environment E_{t+2} .", "This back-and-forth between adults' grammars G and children's environment E is the two-step cycle of language change (Fig. 1).", "In neutral change, learners would acquire grammars at the rates that they are expressed in their environments, but there is good reason to believe that most language change involves differential fitness between competing variants, and most nontrivial learning algorithms yield some kind of fitness (Kroch, 1989; Yang, 2000; Blythe and Croft, 2012, etc.)", ", so A is rarely neutral.", "[Figure 1: Language change as an alternation between G and E matrices: ... G_t → E_{t+1} → G_{t+1} ... G_{t+i} → E_{t+i+1} ...]", "A neutral and simple advantaged model are both considered in Section 3, and a more complex learning algorithm is described for Section 4.", "Application: Testing Assumptions The general nature of the framework described here renders it suitable for reproducing the results of previous works and evaluating their assumptions.", "To demonstrate this, we reproduce the major result from Kauhanen (2016) , which tested the behavior of neutral change in networks of single-grammar learners, in order to dissect two of its primary assumptions.", "Implemented in a typical network framework, the original setup contains n = 200 individuals in probabilistically generated centralized networks in which individuals mature categorically to the single most frequent grammar in their input.", "The author found that categorical neutral
change produced chaotic paths of change regardless of network shape and that periodically \"rewiring\" some of the network edges smoothed this out.", "Without commenting on rewiring, we find that the combination of n and choice of categorical learners conspire to create the chaotic results.", "We create two communities, both centralized along the lines of the single cluster in Kauhanen (2016) , initialize all members of cluster 1 with grammar g 1 and all members of cluster 2 with grammar g 2 , and additional edges are added between members of clusters 1 and 2 to allow interaction.", "G is converted to an indicator matrix at the end of each learning iteration by rounding values to 0 and 1 in order to model categorical learners who only internalize the most common grammar in their inputs as in the original model.", "In a pair of infinitely large clusters or two clusters where individuals are permitted to learn a probabilistic distribution of grammars, each cluster should homogenize to a 50/50 distribution of g 1 and g 2 after some number of iterations depending on the specifics of the network shape and setting for Ξ± creating the red curves in Fig.", "2 .", "At n = 20000, each of 10 trials roughly follows the path of the predicted curve, but when run at the original n = 200 for 10 trials, this produces the type of chaotic behavior which Kauhanen (2016) attempts to repair.", "The outcome appears to be the result of an assumption made out of convenience (n = 200) rather than a principled decision.", "To further explore the impact of the population size assumption, we experiment on a model of advantaged change, which is typically contrasted with neutral change because of its tendency to produce \"well-behaved\" S-curve change (Blythe and Croft, 2012; Kauhanen, 2016) .", "This time, only a single cluster is created, and the advantaged grammar is initially assigned to 1% of the population.", "As seen in Figure 3 , results are chaotic for n = 200 once again and near predicted for n = 20000.", "This is important because at n = 200, advantaged change is chaotic, and most simulations both rise and fall.", "An experimenter who only studied advantaged change in small population might concluded that it is as ill-behaved as neutral change.", "While the conclusions that Kauhanen (2016) draws appear valid for n = 200, it is not clear to what extent they can be projected onto larger populations.", "This demonstrates the need for carefully choosing one's modeling assumptions and testing them out when possible.", "Application: Mergers in Progress The acquisition of phonological mergers in mixed input settings presents an interesting problem.", "It appears that mergers have an inherent advantage because they tend to spread at the expense of distinctions, and once they begin, they are rarely reversed (Labov, 1994) .", "Yang (2009) 's acquisition model quantifies this advantage as the relatively lower chance of misinterpretation if a listener assumes the merged grammar instead of the nonmerged grammar once a sufficient proportion of the environment is merged.", "Applied to Johnson (2007) 's detailed population study of the frontier of the COT-CAUGHT merger in the small towns along the border between Rhode Island and Massachusetts, this accurately predicts the ratio of merged input for a child to acquire the merged grammar, however when applied to a perfectly mixed population of learners, it fails to model the spread of the merged grammar in the population.", "Yang's model is input-driven, so it is conducive to 
simulation with minimal assumptions past those drawn from the empirical data.", "We test the behavior of this learning model in a typical population network and demonstrate that it produces a reasonable path of change.", "Background The COT-CAUGHT merger, also called the low back merger, describes the phenomenon present in varieties of North American English spoken in eastern New England, western Pennsylvania, the American West, and Canada among others where the vowel in words like cot and the vowel in words like caught have come to be pronounced the same (Labov et al., 2005, pp.", "58-65) .", "The geographical extent of the merger is currently expanding, which might be expected if the merger has a cognitive or social advantage associated with it.", "Johnson (2007) 's study of the merger's frontier on the border between Rhode Island and Massachusetts uncovered an interesting social dynamic that illustrates the merger's speed: there are families where the parents and older siblings are not merged, but the younger siblings are.", "The merger has swept through in only a few years and passed between the siblings.", "Yang (2009) seeks to understand why mergers have an advantage from a cognitive perspective, and his model treats the acquisition of mergers as an evolutionary process.", "Learners who receive both merged (M+) and non-merged (M−) input entertain both a merged (g+) and a non-merged (g−) grammar and reward whichever grammar successfully parses the input.", "This kind of variational learner (Yang, 2000) is essentially an adaptation of the classic evolutionary Linear Reward Punishment model (Bush and Mosteller, 1953) .", "The fitness of each grammar is the probability in the limit that it will fail to parse any given input, and since it is virtually always the case that this probability is different for both grammars, fitness is virtually always asymmetric.", "The variational learner is characterized as follows.", "Given two grammars and an input token s, the learner parses s with g_1 with probability p and with g_2 with probability q = 1 − p. p is rewarded according to whether the choice of g successfully parses s (g → s) or fails to (g ↛ s), where γ is some small constant.", "p ← p + γq if g → s, and p ← (1 − γ)p if g ↛ s.", "Given a specific problem, one can calculate a penalty probability C for each g, the proportion of input that would cause g ↛ s. The grammar with the lower C has the advantage, so the other one will be driven down in the long run.", "C can be estimated from type frequencies in a corpus, and the model is non-parametric because these values do not depend on γ.
lim_{t→∞} p_t = C_2 / (C_1 + C_2) and lim_{t→∞} q_t = C_1 / (C_1 + C_2). To understand the COT-CAUGHT merger empirically, one must reason about what kind of input would trigger a penalty and then calculate the penalty probabilities of the merged grammar C+ and non-merged grammar C− from a corpus.", "This model considers parsing failure to be the rate of initial misinterpretation, and for a vowel merger, the only inputs that could create an initial misinterpretation are minimal pairs because they become homophones.", "Examples of COT-CAUGHT minimal pairs include cot-caught, Don-Dawn, stock-stalk, odd-awed, collar-caller, and so on.", "The merged g+ grammar collapses would-be minimal pairs into homophones, so the penalty rate C+ comes down to lexical access.", "Under the observation that more frequent homophones are retrieved first regardless of syntactic context (Caramazza et al., 2001) , g+ listeners only suffer initial misinterpretation when the less frequent member of a pair is uttered, regardless of the rate of M+ .", "If H is the sum token frequency of all minimal pairs and h^i_o , h^i_oh are the frequencies of the ith pair's members, then C+ is calculated by Eqn.", "6.", "In contrast, g− listeners are sensitive to the phonemic distinction, so they misinterpret M− input at the rate of mishearing one vowel for the other (Peterson and Barney, 1952) (second half of Eqn.", "7).", "And given M+ input, they misinterpret whenever they hear the phoneme which g− does not expect (e.g., a merged speaker pronouncing cot with the CAUGHT vowel) times the probability of not mishearing that vowel (1 − ε_oh) plus ε_oh times the probability of hearing the right vowel (i.e., the merged speaker pronounces cot with the COT vowel but it is misheard anyway) (first half of Eqn.", "7).", "Since g− misinterpretation rates are a function of the rate of M+ (p) in the environment, there is a threshold of M+ speakers above which the merged grammar has a fitness advantage over the non-merged one.", "C+ = (1/H) Σ_i min(h^i_o , h^i_oh) (6) and C− = (1/H) Σ_i [ p_0 ((1 − ε_oh) h^i_o + ε_oh h^i_oh) + q_0 (ε_oh h^i_o + ε_oh h^i_oh) ] (7)", "Calculating this threshold for the frequent minimal pairs that Yang extracts from the Wortschatz project (Biemann et al., 2004 ) corpus and mishearing rates from Peterson and Barney (1952), the Yang model predicts that a learner exposed to at least ~17% COT-CAUGHT-merged input will acquire the merger.", "This threshold represents a strong advantage for M+ because it is well under the 50% threshold expected for neutral (non-advantaged) change and it is very close to what was found in Johnson (2007) 's sociolinguistic study.", "It predicts that younger children may have g+ while their parents and even older siblings have g− if the 17% threshold was crossed in E after the acquisition period of the older sibling but before that of the younger sibling.", "Model Setup All the mechanics behind the learning model reduce to a simple statement: learners acquire g+ iff > 17% of their input is M+ and they acquire g− otherwise.", "However, this kind of categorical learner in a perfectly-mixed population leads to immediate fixation at either g− or g+ in a single iteration, since the proportion of g+ speakers in the population is equivalent to the proportion of M+ input in every learner's environment.", "This is not realistic change.", "Clearly, social network structure is at least as important as the learning algorithm in modeling the spread of the merger.", "We model the change in a
non-uniform social network of 100 centralized clusters of 75 individuals each.", "75 was chosen as half Dunbar's number, the maximum number of reliable social connections that an adult can maintain (Dunbar, 2010) .", "There are two grammars, g + and g βˆ’ , and learners internalize one or the other according to the 17% threshold of M + in their input.", "One cluster represents the source of the merger and is initialized at 100% g + , while the rest begin 100% g βˆ’ .", "Inter-cluster connections are chosen randomly so that some connections are between central members of the clusters and some are between peripheral members.", "The one merged cluster is connected to half the other clusters representing those at the frontier of the change, and each other cluster is connected to five randomly chosen ones.", "4 This network structure echoes work in sociolinguistics, in particular, Milroy and Milroy (1985) 's notion of strong and weak connections in language change, where weak connections between social clusters are particularly important for propagation of a change.", "Propagation of the merged grammar is calculated by E n because we are interested in the behavior of individuals without loss of precision and because it cannot be assumed that each cluster is internally uniform.", "5 Since the spread of the merger has been rapid enough to detect over a period of a few years, iterations are modeled as short age co-horts rather than full generations in the first experiments by updating only a randomly chosen 10% of nodes at each iteration because only a fraction of the population is learning at any given time.", "A model where every node is updated is investigated as well.", "Results The behavior of this simulation is shown graphically in Figure 4 .", "The fine/colored lines indicate the rate of M + within each initially non-merged cluster, and the bold/black line shows the average rate across all initially non-merged.", "The merger spreads from cluster to cluster in succession over the \"weak\" inter-cluster connections and through each cluster over the 'strong' connections before moving on to the next ones.", "Everett, 1995) members have the merger, a period of rapid diffusion of the merger, then some time where a few laggards resist the merger.", "As a result, most clusters exhibit an S-like shape.", "A few clusters change rapidly because of their especially wellconnected positions in the network, and some lag behind the rest because they are poorly connected to the rest of the network.", "More interestingly, the population-wide average, the population-level data at the kind of granularity that is often studied, yields a smooth S-curve with a shallower slope than the individual clusters.", "The fact that it arises naturally here in a network that conforms with typical network shapes but was otherwise randomly generated is encouraging because the experiment was not set up so that it would produce such a curve, and the steep rate of change in individual clusters is what is expected for a change that is rapid enough to affect siblings differently.", "In the above simulation, only a fraction of nodes were updated at each iteration in order to model a rapid change.", "In order to confirm that this choice is not affecting the results and to test a purer implementation of the framework presented here, we remove that constraint and update every node at each iteration.", "Figure 5 shows what happens over 20 iterations in a network that is otherwise identical but with 2/5 as many inter-cluster connections 
as the original.", "A qualitatively similar pattern arises, so the choice to update only a fraction of the population is not crucially affecting the results.", "In all experiments so far, social connections were fixed at the first iteration even though connections in real populations tend to change over time.", "To investigate that modeling assumption, we perform another simulation in which connections are randomly updated both within and across clusters at each iteration akin to Kauhanen (2016) 's rewiring.", "The result as shown in Figure 6 is similar to before, with one major difference.", "The individual clusters transition more closely in time because no individual cluster remains poorly connected or especially well connected throughout the entire simulation.", "Finally, we test our assumptions about population size by repeating the experiments on a smaller network of 40 clusters of 18 individuals.", "The results are qualitatively similar, but the S-curve appears to be more sensitive to probabilistic connections in the network.", "To explore this, we present the average network-wide rate of (M + ) across 10 trials, revealing that an S-like curve is formed each time but that its slope varies.", "A few trials never reach 100% because some of the clusters are not connected to the innovative one.", "The slope varies between trials, indicating that the rate of change is a function of both the population structure and the learning algorithm, but the network size does not substantially affect these results.", "Discussion The algebraic-network framework for modeling population-level language change presented here has substantial practical and theoretical advantages over previous ones.", "It is much simpler computationally than previous frameworks because it calculates the statistically expected behavior of each generation analytically and therefore removes the entire inner loop of calculating stochastic inter-agent interactions from the simulation.", "It follows the Niyogi and Berwick (1996) formalism for language change which presents a clean and modular way of reasoning about the problem and promotes the centrality of language acquisition.", "In addition to the core algorithm, the framework offers enough flexibility to represent a wide variety of processes from the highly abstract (e.g., Kauhanen (2016) ) to those grounded in soci-olinguistic and acquisition research (e.g., Yang (2009) ).", "In our investigation of Kauhanen's basic assumptions, we discover how seemingly innocuous decisions about population size and learning conspire to drive simulation results.", "If learners are conceived as categorical learners, population size becomes a deciding factor in the path of change.", "So while the original results are interesting and meaningful, they may only valid for small (on the order of 10 2 ) populations.", "In our simulation of the spread of the COT-CAUGHT merger, we show how a cognitivelymotivated model of acquisition requires a network model in order to represent population-level language change.", "The population is represented as a collection of individual clusters based on sociological work, but the clusters themselves are connected randomly.", "The fact that S-curves arise naturally from these networks underscores their centrality to language change.", "One problem that this line of simulation work has always faced has been the lack of viable comparison between models because every study implements its own learning, network, and interaction models.", "The modular nature of our 
framework advances against this trend since it is now possible to hold the population model constant while slotting in various learning models to test them against one another and vice-versa.", "Finally, since this framework reduces to Niyogi & Berwick's models in perfectly-mixed populations, it can be used to reason about the formal dynamics of language change as well.", "Without simulation, it would be difficult or impossible to uncover the interplay between acquisition and social structure in the propagation of language change.", "Neither factor alone can account for the theoretical or empirically observed patterns.", "Simulations of this kind, which explicitly model both factors simultaneously, are well equipped to provide insights that fieldwork and laboratory work cannot.", "As such, they are an invaluable complement to those more traditional methodologies." ] }
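To make the propagation and learning steps described in the paper content above concrete, here is a minimal NumPy sketch of the environment function E_n (Eqn. 2) combined with a categorical 17%-threshold learner of the kind used for the COT-CAUGHT simulation. It is not the authors' code; the function names, the two-grammar assumption, and the transposes (read off the matrix dimensions) are our own.

```python
import numpy as np

def environment(G, A, alpha):
    """E_n(G_t, A): expected mix of grammars heard by every learner.

    G     : (n, g) row-stochastic matrix of the adults' grammar use
    A     : (n, n) column-stochastic adjacency matrix of the social network
    alpha : stopping probability of the geometric "travel" distribution
    Returns a (g, n) matrix whose column j is learner j's input distribution.
    """
    n = A.shape[0]
    # The sum over path lengths k of alpha (1-alpha)^k A^k has the closed form
    # alpha (I - (1-alpha) A)^-1, which remains column stochastic.
    P = alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * A)
    return G.T @ P

def threshold_learners(E, merged=0, other=1, threshold=0.17):
    """Categorical learners for a two-grammar competition: acquire the
    merged grammar iff more than `threshold` of the input is merged."""
    n_grammars, n_learners = E.shape
    G_next = np.zeros((n_learners, n_grammars))
    acquires_merger = E[merged] > threshold
    G_next[acquires_merger, merged] = 1.0
    G_next[~acquires_merger, other] = 1.0
    return G_next

# One generation of the two-step cycle G_t -> E_{t+1} -> G_{t+1}:
# G = threshold_learners(environment(G, A, alpha=0.1))
```

Iterating the last line over many generations, with A built as clusters plus random inter-cluster links, is the kind of simulation the paper runs; the analytic matrix inverse is what removes the inner loop of stochastic agent interactions.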
{ "paper_header_number": [ "1", "1.1", "2", "2.2", "2.3", "3", "4", "4.1", "4.2", "4.3", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Framework for Transmission in Social Networks", "Propagation in the Network", "Learning in the Network", "Application: Testing Assumptions", "Application: Mergers in Progress", "Background", "Model Setup", "Results", "Discussion" ] }
GEM-SciDuet-train-106#paper-1281#slide-24
Results Community Averages
At small network sizes, the community average is more sensitive to random connections Repeat the small-scale experiment 10 times The slope is ~consistent in most simulations A few simulations show aberrant behaviour
At small network sizes, the community average is more sensitive to random connections Repeat the small-scale experiment 10 times The slope is ~consistent in most simulations A few simulations show aberrant behaviour
[]
GEM-SciDuet-train-107#paper-1284#slide-0
1284
Recursive Subtree Composition in LSTM-Based Dependency Parsing
The need for tree structure modelling on top of sequence modelling is an open issue in neural dependency parsing. We investigate the impact of adding a tree layer on top of a sequential model by recursively composing subtree representations (composition) in a transition-based parser that uses features extracted by a BiLSTM. Composition seems superfluous with such a model, suggesting that BiLSTMs capture information about subtrees. We perform model ablations to tease out the conditions under which composition helps. When ablating the backward LSTM, performance drops and composition does not recover much of the gap. When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information. We take the backward LSTM to be related to lookahead features and the forward LSTM to the rich history-based features both crucial for transition-based parsers. To capture history-based information, composition is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM. We correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203 ], "paper_content_text": [ "Introduction Recursive neural networks allow us to construct vector representations of trees or subtrees.", "They have been used for constituency parsing by Socher et al.", "(2013) and Dyer et al.", "(2016) and for dependency parsing by Stenetorp (2013) and , among others.", "In particular, showed that composing representations of subtrees using recursive neural networks can be beneficial for transition-based dependency parsing.", "These results were further strengthened in Kuncoro et al.", "(2017) who showed, using ablation experiments, that composition is key in the Recurrent Neural Network Grammar (RNNG) generative parser by Dyer et al.", "(2016) .", "In a parallel development, Kiperwasser and Goldberg (2016b) showed that using BiLSTMs for feature extraction can lead to high parsing accuracy even with fairly simple parsing architectures, and using BiLSTMs for feature extraction has therefore become very popular in dependency parsing.", "It is used in the state-of-the-art parser of Dozat and Manning (2017) , was used in 8 of the 10 highest performing systems of the 2017 CoNLL shared task (Zeman et al., 2017) and 10 out of the 10 highest performing systems of the 2018 CoNLL shared task (Zeman et al., 2018) .", "This raises the question of whether features extracted with BiLSTMs in themselves capture information about subtrees, thus making recursive composition superfluous.", "Some support for this hypothesis comes from the results of Linzen et al.", "(2016) which indicate that LSTMs can capture hierarchical information: they can be trained to predict long-distance number agreement in English.", "Those results were extended to more constructions and three additional languages by Gulordava et al.", "(2018) .", "However, Kuncoro et al.", "(2018) have also shown that although sequential LSTMs can learn syntactic information, a recursive neural network which explicitly models hierarchy (the RNNG model from ) is better at this: it performs better on the number agreement task from Linzen et al.", "(2016) .", "To further explore this question in the context of dependency parsing, we investigate the use of recursive composition (henceforth referred to as composition) in a parser with an architecture like the one in Kiperwasser and Goldberg (2016b) .", "This allows us to explore variations of features and isolate the conditions under which composi-tion is helpful.", "We hypothesise that the use of a BiLSTM for feature extraction makes it possible to capture information about subtrees and therefore makes the use of subtree composition superfluous.", "We 
hypothesise that composition becomes useful when part of the BiLSTM is ablated, the forward or the backward LSTM.", "We further hypothesise that composition is most useful when the parser has no access to information about the function of words in the context of the sentence given by POS tags.", "When using POS tags, the tagger has indeed had access to the full sentence.", "We additionally look at what happens when we ablate character vectors which have been shown to capture information which is partially overlapping with information from POS tags.", "We experiment with a wider variety of languages than in order to explore whether the usefulness of different model variants vary depending on language type.", "K&G Transition-Based Parsing We define the parsing architecture introduced by Kiperwasser and Goldberg (2016b) at a high level of abstraction and henceforth refer to it as K&G.", "A K&G parser is a greedy transition-based parser.", "1 For an input sentence of length n with words w 1 , .", ".", ".", ", w n , a sequence of vectors x 1:n is created, where the vector x i is a vector representation of the word w i .", "We refer to these as type vectors, as they are the same for all occurrences of a word type.", "Type vectors are then passed through a feature function which learns representations of words in the context of the sentence.", "x i = e(w i ) v i = f (x 1:n , i) We refer to the vector v i as a token vector, as it is different for different tokens of the same word type.", "In Kiperwasser and Goldberg (2016b) , the feature function used is a BiLSTM.", "As is usual in transition-based parsing, parsing involves taking transitions from an initial configuration to a terminal one.", "Parser configurations are represented by a stack, a buffer and set of dependency arcs (Nivre, 2008) .", "For each configuration c, the feature extractor concatenates the token representations of core elements from the stack and buffer.", "These token vectors are passed to a classifier, typically a Multilayer Perceptron (MLP).", "The MLP scores transitions together with the arc labels for transitions that involve adding an arc.", "Both the word type vectors and the BiLSTMs are trained together with the model.", "looked at the impact of using a recursive composition function in their parser, which is also a transition-based parser but with an architecture different from K&G.", "They make use of a variant of the LSTM called a stack LSTM.", "A stack LSTM has push and pop operations which allow passing through states in a tree structure rather than sequentially.", "Stack LSTMs are used to represent the stack, the buffer, and the sequence of past parsing actions performed for a configuration.", "Composing Subtree Representations The words of the sentence are represented by vectors of the word types, together with a vector representing the word's POS tag.", "In the initial configuration, the vectors of all words are in the buffer and the stack is empty.", "The representation of the buffer is the end state of a backward LSTM over the word vectors.", "As parsing evolves, the word vectors are popped from the buffer, pushed to and popped from the stack and the representations of stack and buffer get updated.", "define a recursive composition function and compose tree representations incrementally, as dependents get attached to their head.", "The composed representation c is built by concatenating the vector h of the head with the vector of the dependent d, as well as a vector r representing the label paired with the 
direction of the arc.", "That concatenated vector is passed through an affine transformation and then through a tanh non-linear activation.", "c = tanh(W [h; d; r] + b) They create two versions of the parser.", "In the first version, when a dependent is attached to a head, the word vector of the head is replaced by a composed vector of the head and dependent.", "In the second version, they simply keep the vector of the head when attaching a dependent to a head.", "They observe that the version with composition is substantially better than the version without, by 1.3 LAS points for English (on the Penn Treebank (PTB) test set) and 2.1 for Chinese (on the Chinese Treebank (CTB) test set).", "Their parser uses POS tag information.", "POS tags help to disambiguate between different functional uses of a word and in this way give information about the use of the word in context.", "We hypothesise that the effect of using a recursive composition function is stronger when not making use of POS tags.", "Composition in a K&G Parser The parsing architectures of the stack LSTM parser (S-LSTM) and K&G are different but have some similarities.", "2 In both cases, the configuration is represented by vectors obtained by LSTMs.", "In K&G, it is represented by the token vectors of top items of the stack and the first item of the buffer.", "In the S-LSTM, it is represented by the vector representations of the entire stack, buffer and sequence of past transitions.", "Both types of parsers learn vector representations of word types which are passed to an LSTM.", "In K&G, they are passed to an LSTM in a feature extraction step that happens before parsing.", "The LSTM in this case is used to learn vectors that have information about the context of each word, a token vector.", "In the S-LSTM, word type vectors are passed to Stack LSTMs as parsing evolves.", "In this case, LSTMs are used to learn vector representations of the stack and buffer (as well as one which learns a representation of the parsing action history).", "When composition is not used in the S-LSTM, word vectors represent word types.", "When composition is used, as parsing evolves, the stack and buffer vectors get updated with information about the subtrees they contain, so that they gradually become contextualised.", "In this sense, those vectors become more like token vectors in K&G.", "More specifically, as explained in the previous section, when a dependent is attached to its head, the composition function is applied to the vectors of head and dependent and the vector of the head is replaced by this composed vector.", "We cannot apply composition on type vectors in the K&G architecture, since they are not used after the feature extraction step and hence cannot influence the representation of the configuration.", "Instead, we apply composition on the token vectors.", "We embed those composed representations in the same space as the token vectors.", "In K&G, like in the S-LSTM, we can create a composition function and compose the representation of subtrees as parsing evolves.", "We create two versions of the parser, one where word tokens are represented by their token vector.", "The other where they are represented by their token vector and the vector of their subtree c i , which is initially just a copy of the token vector (v i = f (x 1:n , i)β€’c i ).", "When a dependent word d is attached to a word h with a relation and direction r, c i is computed with the same composition function as in the S-LSTM defined in the previous section, repeated 
below.", "3 This composition function is a simple recurrent cell.", "Simple RNNs have known shortcomings which have been addressed by using LSTMs, as proposed by Hochreiter and Schmidhuber (1997) .", "A natural extension to this composition function is therefore to replace it with an LSTM cell.", "We also try this variant.", "We construct LSTMs for subtrees.", "We initialise a new LSTM for each new subtree that is formed, that is, when a dependent d is attached to a head h which does not have any dependent yet.", "Each time we attach a dependent to a head, we construct a vector which is a concatenation of h, d and r. We pass this vector to the LSTM of h. c is the output state of the LSTM after passing through that vector.", "We denote those models with +rc for the one using an ungated recurrent cell and with +lc for the one using an LSTM cell.", "c = tanh(W [h; d; r] + b) c = LSTM([h; d; r]) As results show (see Β§ 5), neither type of composition seems useful when used with the K&G parsing model, which indicates that BiLSTMs capture information about subtrees.", "To further investigate this and in order to isolate the conditions under which composition is helpful, we perform different model ablations and test the impact of recursive composition on these ablated models.", "First, we ablate parts of the BiLSTMs: we ablate either the forward or the backward LSTM.", "We therefore build parsers with 3 different feature functions f (x, i) over the word type vectors x i in the sentence x: a BiLSTM (bi) (our baseline), a backward LSTM (bw) (i.e., ablating the forward LSTM) and a forward LSTM (f w) (i.e., ablating the backward LSTM): bi(x, i) = BILSTM(x 1:n , i) bw(x, i) = LSTM(x n:1 , i) f w(x, i) = LSTM(x 1:n , i) K&G parsers with unidirectional LSTMs are, in some sense, more similar to the S-LSTM than those with a BiLSTM, since the S-LSTM only uses unidirectional LSTMs.", "We hypothesise that composition will help the parser using unidirectional LSTMs in the same way it helps an S-LSTM.", "We additionally experiment with the vector representing the word at the input of the LSTM.", "The most complex representation consists of a concatenation of an embedding of the word type e(w i ), an embedding of the (predicted) POS tag of w i , p(w i ) and a character representation of the word obtained by running a BiLSTM over the characters ch 1:m of w i (BiLSTM(ch 1:m )).", "x i = e(w i ) β€’ p(w i ) β€’ BiLSTM(ch 1:m ) Without a POS tag embedding, the word vector is a representation of the word type.", "With POS information, we have some information about the word in the context of the sentence and the tagger has had access to the full sentence.", "The representation of the word at the input of the BiLSTM is therefore more contextualised and it can be expected that a recursive composition function will be less helpful than when POS information is not used.", "Character information has been shown to be useful for dependency parsing first by .", "and Smith et al.", "(2018b) among others have shown that POS and character information are somewhat complementary.", "used similar character vectors in the S-LSTM parser but did not look at the impact of composition when using these vectors.", "Here, we experiment with ablating either or both of the character and POS vectors.", "We look at the impact of using composition on the full model as well as these ablated models.", "We hypothesise that composition is most helpful when those vectors are not used, since they give information about the functional use of the 
word in context.", "Parser We use UUParser, a variant of the K&G transition-based parser that employs the arc-hybrid transition system from Kuhlmann et al.", "(2011) extended with a SWAP transition and a Static-Dynamic oracle, as described in de Lhoneux et al.", "(2017b) 4 .", "The SWAP transition is used to allow the construction of non-projective dependency trees (Nivre, 2009) .", "We use default hyperparameters.", "When using POS tags, we use the universal POS tags from the UD treebanks which are coarsegrained and consistent across languages.", "Those POS tags are predicted by UDPipe (Straka et al., 2016) both for training and parsing.", "This parser obtained the 7th best LAS score on average in the 2018 CoNLL shared task (Zeman et al., 2018) , about 2.5 LAS points below the best system, which uses an ensemble system as well as ELMo embeddings, as introduced by Peters et al.", "(2018) .", "Note, however, that we use a slightly impoverished version of the model used for the shared task which is described in Smith et al.", "(2018a) : we use a less accurate POS tagger (UDPipe) and we do not make use of multi-treebank models.", "In addition, Smith et al.", "(2018a) use the three top items of the stack as well as the first item of the buffer to represent the configuration, while we only use the two top items of the stack and the first item of the buffer.", "Smith et al.", "(2018a) also use an extended feature set as introduced by Kiperwasser and Goldberg (2016b) where they also use the rightmost and leftmost children of the items of the stack and buffer that they consider.", "We do not use that extended feature set.", "This is to keep the parser settings as simple as possible and avoid adding confounding factors.", "It is still a near-SOTA model.", "We evaluate parsing models on the development sets and report the average of the 5 best results in 30 epochs and 5 runs with different random seeds.", "Data We test our models on a sample of treebanks from Universal Dependencies v2.1 (Nivre et al., 2017) .", "We follow the criteria from de Lhoneux et al.", "(2017c) to select our sample: we ensure typological variety, we ensure variety of domains, we verify the quality of the treebanks, and we use one treebank with a large amount of non-projective arcs.", "However, unlike them, we do not use extremely small treebanks.", "Our selection is the same as theirs but we remove the tiny treebanks and replace them with 3 others.", "Our final set is: Ancient Greek (PROIEL), Basque, Chinese, Czech, English, Finnish, French, Hebrew and Japanese.", "Results First, we look at the effect of our different recursive composition functions on the full model (i.e., the model using a BiLSTM for feature extraction as well as both character and POS tag information).", "As can be seen from Figure 1 , recursive composition using an LSTM cell (+lc) is generally better than recursive composition with a recurrent cell (+rc), but neither technique reliably improves the accuracy of a BiLSTM parser.", "Ablating the forward and backward LSTMs Second, we only consider the models using character and POS information and look at the effect of ablating parts of the BiLSTM on the different languages.", "The results can be seen in Figure 2 .", "As expected, the BiLSTM parser performs considerably better than both unidirectional LSTM parsers, and the backward LSTM is considerably better than the forward LSTM, on average.", "It is, however, interesting to note that using a forward LSTM is much more hurtful for some languages than others: 
it is especially hurtful for Chinese and Japanese.", "This can be explained by language properties: the right-headed languages suffer more from ablating the backward LSTM than other languages.", "We observe a correlation between how hurtful a forward model is compared to the baseline and the percentage of right-headed content dependency relations (R = βˆ’0.838, p < .01), see Figure 3 .", "5 There is no significant correlation between how hurtful ablating the forward LSTM is and the percentage of left-headed content dependency relations (p > .05) indicating that its usefulness is not dependent on language properties.", "We hypothesise that dependency length or sentence length can play a role but we also find no correlation between how hurtful it is to ablate the forward LSTM and average dependency or sentence length in treebanks.", "It is finally also interesting to note that the backward LSTM performance is close to the BiLSTMs performance for some languages (Japanese and French).", "5 The reason we only consider content dependency relations is that the UD scheme focuses on dependency relations between content words and treats function words as features of content words to maximise parallelism across languages (de Marneffe et al., 2014) .", "We now look at the effect of using recursive composition on these ablated models.", "Results are given in Figure 4 .", "First of all, we observe unsurprisingly that composition using an LSTM cell is much better than using a simple recurrent cell.", "Second, both types of composition help the backward LSTM case, but neither reliably helps the bi models.", "Finally, the recurrent cell does not help the forward LSTM case but the LSTM cell does to some extent.", "It is interesting to note that using composition, especially using an LSTM cell, bridges a substantial part of the gap between the bw and the bi models.", "These results can be related to the literature on transition-based dependency parsing.", "Transitionbased parsers generally rely on two types of features: history-based features over the emerging dependency tree and lookahead features over the buffer of remaining input.", "The former are based on a hierarchical structure, the latter are purely sequential.", "McDonald and Nivre (2007) and Mc-Donald and Nivre (2011) have shown that historybased features enhance transition-based parsers as long as they do not suffer from error propagation.", "However, Nivre (2006) has also shown that lookahead features are absolutely crucial given the greedy left-to-right parsing strategy.", "In the model architectures considered here, the backward LSTM provides an improved lookahead.", "Similarly to the lookahead in statistical parsing, it is sequential.", "The difference is that it gives information about upcoming words with unbounded length.", "The forward LSTM in this model architecture provides history-based information but unlike in statistical parsing, that information is built sequentially rather than hierarchically: the forward LSTM passes through the sentence in the linear order of the sentence.", "In our results, we see that lookahead features are more important than the history-based ones.", "It hurts parsing accuracy more to ablate the backward LSTM than to ablate the forward one.", "This is expected given that some history-based information is still available through the top tokens on the stack, while the lookahead information is almost lost completely without the backward LSTM.", "A composition function gives hierarchical information about the 
history of parsing actions.", "It makes sense that it helps the backward LSTM model most since that model has no access to any information about parsing history.", "It helps the forward LSTM slightly which indicates that there can be gains from using structured information about parsing history rather than sequential information.", "We could then expect that composition should help the BiLSTM model which, how- Figure 5 : LAS of baseline, using char and/or POS tags to construct word representations ever, is not the case.", "This might be because the BiLSTM constructs information about parsing history and lookahead into a unique representation.", "In any case, this indicates that BiLSTMs are powerful feature extractors which seem to capture useful information about subtrees.", "Ablating POS and character information Next, we look at the effect of the different word representation methods on the different languages, as represented in Figure 5 .", "As is consistent with the literature de Lhoneux et al., 2017a; Smith et al., 2018b) , using characterbased word representations and/or POS tags consistently improves parsing accuracy but has a different impact in different languages and the benefits of both methods are not cumulative: using the two combined is not much better than using either on its own.", "In particular, character models are an efficient way to obtain large improvements in morphologically rich languages.", "We look at the impact of recursive compositions on all combinations of ablated models, see Table 1 .", "We only look at the impact of using an LSTM cell rather than a recurrent cell since it was a better technique across the board (see previous section).", "Looking first at BiLSTMs, it seems that composition does not reliably help parsing accuracy, regardless of access to POS and character information.", "This indicates that the vectors obtained from the BiLSTM already contain information that would otherwise be obtained by using composition.", "Turning to results with either the forward or the backward LSTM ablated, we see the expected pattern.", "Composition helps more when the model lacks POS tags, indicating that there is some redundancy between these two methods of building contextual information.", "Composition helps recover a substantial part of the gap of the model with a backward LSTM with or without POS tag.", "It recovers a much less substantial part of the gap in other cases which means that, although there is some redundancy between these different methods of building contextual information, they are still complementary and a recursive composition function cannot fully compensate for the lack of a backward LSTM or POS and/or character information.", "There are some language idiosyncracies in the results.", "While composition helps recover most of the gap for the backward LSTM models without POS and/or character information for Czech and English, it does it to a much smaller extent for Basque and Finnish.", "We hypothesise that arc depth might impact the usefulness of composition, since more depth means more matrix multiplications with the composition function.", "However, we find no correlation between average arc depth of the treebanks and usefulness of composition.", "It is an open question why composition helps some languages more than others.", "Note that we are not the first to use composition over vectors obtained from a BiLSTM in the context of dependency parsing, as this was done by Qi and Manning (2017) .", "The difference is that they compose vectors 
before scoring transitions.", "It was also done by Kiperwasser and Goldberg (2016a) who showed that using BiLSTM vectors for words in their Tree LSTM parser is helpful but they did not compare this to using BiLSTM vectors without the Tree LSTM.", "Recurrent and recursive LSTMs in the way they have been considered in this paper are two ways of constructing contextual information and making it available for local decisions in a greedy parser.", "The strength of recursive LSTMs is that they can build this contextual information using hierarchical context rather than linear context.", "A possible weakness is that this makes the model sensitive to error propagation: a wrong attachment leads to using the wrong contextual information.", "It is therefore possible that the benefits and drawbacks of using this method cancel each other out in the context of BiLSTMs.", "Ensemble To investigate further the information captured by BiLSTMs, we ensemble the 6 versions of the models with POS and character information with the different feature extractors (bi, bw, f w) with (+lc) and without composition.", "We use the (unweighted) reparsing technique of Sagae and Lavie (2006) 6 and ignoring labels.", "As can be seen from the UAS scores in Table 2 , the ensemble (full) largely outperforms the parser using only a BiLSTM, indicating that the information obtained from the different models is complementary.", "To investigate the contribution of each of the 6 models, we ablate each one by one.", "As can be seen from Table 2 , ablating either of the BiLSTM models or the backward LSTM using composition, results in the least effective of the ablated models, further strengthening the conclusion that BiL-STMs are powerful feature extractors.", "Conclusion We investigated the impact of composing the representation of subtrees in a transition-based parser.", "We observed that composition does not reliably help a parser that uses a BiLSTM for feature extraction, indicating that vectors obtained from the BiLSTM might capture subtree information, which is consistent with the results of Linzen et al.", "(2016) .", "However, we observe that, when ablating the backward LSTM, performance drops and recursive composition does not help to recover much of this gap.", "We hypothesise that this is because the backward LSTM primarily improves the lookahead for the greedy parser.", "When ablating the forward LSTM, performance drops to a smaller extent and recursive composition recovers a substantial part of the gap.", "This indicates that a forward LSTM and a recursive composition function capture similar information, which we take to be related to the rich history-based features crucial for a transition-based parser.", "To capture this infor-mation, a recursive composition function is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM.", "We further find that recursive composition helps more when POS tags are ablated from the model, indicating that POS tags and a recursive composition function are partly redundant ways of constructing contextual information.", "Finally, we correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "K&G Transition-Based Parsing", "Composing Subtree Representations", "Composition in a K&G Parser", "Results", "Ablating the forward and backward LSTMs", "Ablating POS and character information", "Ensemble", "Conclusion" ] }
GEM-SciDuet-train-107#paper-1284#slide-0
Recursive vs recurrent NNs
The largest city in Minnesota Miryam de Lhoneux, Miguel Ballesteros and Joakim Nivre Petite kent te aman nl English PTB Chinese CTB Examine composition in simple architecture
The largest city in Minnesota Miryam de Lhoneux, Miguel Ballesteros and Joakim Nivre Petite kent te aman nl English PTB Chinese CTB Examine composition in simple architecture
[]
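For readers who want to see the two recursive composition variants discussed in the paper content above in code, here is a minimal sketch of the +rc (ungated recurrent cell, c = tanh(W[h; d; r] + b)) and +lc (LSTM cell, c = LSTM([h; d; r])) composition functions. It is written in PyTorch purely for illustration and is not the authors' implementation; the class names, dimensions, and the batched (1, dim) tensor shapes are assumptions.

```python
import torch
import torch.nn as nn

class RecurrentComposition(nn.Module):
    """+rc variant: c = tanh(W [h; d; r] + b), an ungated recurrent cell.
    The composed vector c stands in for the head's subtree representation."""
    def __init__(self, token_dim, rel_dim):
        super().__init__()
        self.W = nn.Linear(2 * token_dim + rel_dim, token_dim)

    def forward(self, head, dep, rel):
        # head, dep: (1, token_dim) vectors of head and dependent
        # rel: (1, rel_dim) embedding of the arc label paired with its direction
        return torch.tanh(self.W(torch.cat([head, dep, rel], dim=-1)))

class LSTMComposition(nn.Module):
    """+lc variant: c = LSTM([h; d; r]); a fresh LSTM state is created for each
    new subtree and fed one [h; d; r] vector per attached dependent."""
    def __init__(self, token_dim, rel_dim):
        super().__init__()
        self.cell = nn.LSTMCell(2 * token_dim + rel_dim, token_dim)

    def forward(self, head, dep, rel, state=None):
        x = torch.cat([head, dep, rel], dim=-1)
        h, c = self.cell(x, state)   # state is None when the head gets its first dependent
        return h, (h, c)             # h is the composed vector; keep (h, c) for later arcs
```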
GEM-SciDuet-train-107#paper-1284#slide-1
1284
Recursive Subtree Composition in LSTM-Based Dependency Parsing
The need for tree structure modelling on top of sequence modelling is an open issue in neural dependency parsing. We investigate the impact of adding a tree layer on top of a sequential model by recursively composing subtree representations (composition) in a transition-based parser that uses features extracted by a BiLSTM. Composition seems superfluous with such a model, suggesting that BiLSTMs capture information about subtrees. We perform model ablations to tease out the conditions under which composition helps. When ablating the backward LSTM, performance drops and composition does not recover much of the gap. When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information. We take the backward LSTM to be related to lookahead features and the forward LSTM to the rich history-based features both crucial for transition-based parsers. To capture history-based information, composition is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM. We correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203 ], "paper_content_text": [ "Introduction Recursive neural networks allow us to construct vector representations of trees or subtrees.", "They have been used for constituency parsing by Socher et al.", "(2013) and Dyer et al.", "(2016) and for dependency parsing by Stenetorp (2013) and , among others.", "In particular, showed that composing representations of subtrees using recursive neural networks can be beneficial for transition-based dependency parsing.", "These results were further strengthened in Kuncoro et al.", "(2017) who showed, using ablation experiments, that composition is key in the Recurrent Neural Network Grammar (RNNG) generative parser by Dyer et al.", "(2016) .", "In a parallel development, Kiperwasser and Goldberg (2016b) showed that using BiLSTMs for feature extraction can lead to high parsing accuracy even with fairly simple parsing architectures, and using BiLSTMs for feature extraction has therefore become very popular in dependency parsing.", "It is used in the state-of-the-art parser of Dozat and Manning (2017) , was used in 8 of the 10 highest performing systems of the 2017 CoNLL shared task (Zeman et al., 2017) and 10 out of the 10 highest performing systems of the 2018 CoNLL shared task (Zeman et al., 2018) .", "This raises the question of whether features extracted with BiLSTMs in themselves capture information about subtrees, thus making recursive composition superfluous.", "Some support for this hypothesis comes from the results of Linzen et al.", "(2016) which indicate that LSTMs can capture hierarchical information: they can be trained to predict long-distance number agreement in English.", "Those results were extended to more constructions and three additional languages by Gulordava et al.", "(2018) .", "However, Kuncoro et al.", "(2018) have also shown that although sequential LSTMs can learn syntactic information, a recursive neural network which explicitly models hierarchy (the RNNG model from ) is better at this: it performs better on the number agreement task from Linzen et al.", "(2016) .", "To further explore this question in the context of dependency parsing, we investigate the use of recursive composition (henceforth referred to as composition) in a parser with an architecture like the one in Kiperwasser and Goldberg (2016b) .", "This allows us to explore variations of features and isolate the conditions under which composi-tion is helpful.", "We hypothesise that the use of a BiLSTM for feature extraction makes it possible to capture information about subtrees and therefore makes the use of subtree composition superfluous.", "We 
hypothesise that composition becomes useful when part of the BiLSTM is ablated, the forward or the backward LSTM.", "We further hypothesise that composition is most useful when the parser has no access to information about the function of words in the context of the sentence given by POS tags.", "When using POS tags, the tagger has indeed had access to the full sentence.", "We additionally look at what happens when we ablate character vectors which have been shown to capture information which is partially overlapping with information from POS tags.", "We experiment with a wider variety of languages than in order to explore whether the usefulness of different model variants vary depending on language type.", "K&G Transition-Based Parsing We define the parsing architecture introduced by Kiperwasser and Goldberg (2016b) at a high level of abstraction and henceforth refer to it as K&G.", "A K&G parser is a greedy transition-based parser.", "1 For an input sentence of length n with words w 1 , .", ".", ".", ", w n , a sequence of vectors x 1:n is created, where the vector x i is a vector representation of the word w i .", "We refer to these as type vectors, as they are the same for all occurrences of a word type.", "Type vectors are then passed through a feature function which learns representations of words in the context of the sentence.", "x i = e(w i ) v i = f (x 1:n , i) We refer to the vector v i as a token vector, as it is different for different tokens of the same word type.", "In Kiperwasser and Goldberg (2016b) , the feature function used is a BiLSTM.", "As is usual in transition-based parsing, parsing involves taking transitions from an initial configuration to a terminal one.", "Parser configurations are represented by a stack, a buffer and set of dependency arcs (Nivre, 2008) .", "For each configuration c, the feature extractor concatenates the token representations of core elements from the stack and buffer.", "These token vectors are passed to a classifier, typically a Multilayer Perceptron (MLP).", "The MLP scores transitions together with the arc labels for transitions that involve adding an arc.", "Both the word type vectors and the BiLSTMs are trained together with the model.", "looked at the impact of using a recursive composition function in their parser, which is also a transition-based parser but with an architecture different from K&G.", "They make use of a variant of the LSTM called a stack LSTM.", "A stack LSTM has push and pop operations which allow passing through states in a tree structure rather than sequentially.", "Stack LSTMs are used to represent the stack, the buffer, and the sequence of past parsing actions performed for a configuration.", "Composing Subtree Representations The words of the sentence are represented by vectors of the word types, together with a vector representing the word's POS tag.", "In the initial configuration, the vectors of all words are in the buffer and the stack is empty.", "The representation of the buffer is the end state of a backward LSTM over the word vectors.", "As parsing evolves, the word vectors are popped from the buffer, pushed to and popped from the stack and the representations of stack and buffer get updated.", "define a recursive composition function and compose tree representations incrementally, as dependents get attached to their head.", "The composed representation c is built by concatenating the vector h of the head with the vector of the dependent d, as well as a vector r representing the label paired with the 
direction of the arc.", "That concatenated vector is passed through an affine transformation and then through a tanh non-linear activation.", "c = tanh(W [h; d; r] + b) They create two versions of the parser.", "In the first version, when a dependent is attached to a head, the word vector of the head is replaced by a composed vector of the head and dependent.", "In the second version, they simply keep the vector of the head when attaching a dependent to a head.", "They observe that the version with composition is substantially better than the version without, by 1.3 LAS points for English (on the Penn Treebank (PTB) test set) and 2.1 for Chinese (on the Chinese Treebank (CTB) test set).", "Their parser uses POS tag information.", "POS tags help to disambiguate between different functional uses of a word and in this way give information about the use of the word in context.", "We hypothesise that the effect of using a recursive composition function is stronger when not making use of POS tags.", "Composition in a K&G Parser The parsing architectures of the stack LSTM parser (S-LSTM) and K&G are different but have some similarities.", "2 In both cases, the configuration is represented by vectors obtained by LSTMs.", "In K&G, it is represented by the token vectors of top items of the stack and the first item of the buffer.", "In the S-LSTM, it is represented by the vector representations of the entire stack, buffer and sequence of past transitions.", "Both types of parsers learn vector representations of word types which are passed to an LSTM.", "In K&G, they are passed to an LSTM in a feature extraction step that happens before parsing.", "The LSTM in this case is used to learn vectors that have information about the context of each word, a token vector.", "In the S-LSTM, word type vectors are passed to Stack LSTMs as parsing evolves.", "In this case, LSTMs are used to learn vector representations of the stack and buffer (as well as one which learns a representation of the parsing action history).", "When composition is not used in the S-LSTM, word vectors represent word types.", "When composition is used, as parsing evolves, the stack and buffer vectors get updated with information about the subtrees they contain, so that they gradually become contextualised.", "In this sense, those vectors become more like token vectors in K&G.", "More specifically, as explained in the previous section, when a dependent is attached to its head, the composition function is applied to the vectors of head and dependent and the vector of the head is replaced by this composed vector.", "We cannot apply composition on type vectors in the K&G architecture, since they are not used after the feature extraction step and hence cannot influence the representation of the configuration.", "Instead, we apply composition on the token vectors.", "We embed those composed representations in the same space as the token vectors.", "In K&G, like in the S-LSTM, we can create a composition function and compose the representation of subtrees as parsing evolves.", "We create two versions of the parser, one where word tokens are represented by their token vector.", "The other where they are represented by their token vector and the vector of their subtree c i , which is initially just a copy of the token vector (v i = f (x 1:n , i)β€’c i ).", "When a dependent word d is attached to a word h with a relation and direction r, c i is computed with the same composition function as in the S-LSTM defined in the previous section, repeated 
below.", "3 This composition function is a simple recurrent cell.", "Simple RNNs have known shortcomings which have been addressed by using LSTMs, as proposed by Hochreiter and Schmidhuber (1997) .", "A natural extension to this composition function is therefore to replace it with an LSTM cell.", "We also try this variant.", "We construct LSTMs for subtrees.", "We initialise a new LSTM for each new subtree that is formed, that is, when a dependent d is attached to a head h which does not have any dependent yet.", "Each time we attach a dependent to a head, we construct a vector which is a concatenation of h, d and r. We pass this vector to the LSTM of h. c is the output state of the LSTM after passing through that vector.", "We denote those models with +rc for the one using an ungated recurrent cell and with +lc for the one using an LSTM cell.", "c = tanh(W [h; d; r] + b) c = LSTM([h; d; r]) As results show (see Β§ 5), neither type of composition seems useful when used with the K&G parsing model, which indicates that BiLSTMs capture information about subtrees.", "To further investigate this and in order to isolate the conditions under which composition is helpful, we perform different model ablations and test the impact of recursive composition on these ablated models.", "First, we ablate parts of the BiLSTMs: we ablate either the forward or the backward LSTM.", "We therefore build parsers with 3 different feature functions f (x, i) over the word type vectors x i in the sentence x: a BiLSTM (bi) (our baseline), a backward LSTM (bw) (i.e., ablating the forward LSTM) and a forward LSTM (f w) (i.e., ablating the backward LSTM): bi(x, i) = BILSTM(x 1:n , i) bw(x, i) = LSTM(x n:1 , i) f w(x, i) = LSTM(x 1:n , i) K&G parsers with unidirectional LSTMs are, in some sense, more similar to the S-LSTM than those with a BiLSTM, since the S-LSTM only uses unidirectional LSTMs.", "We hypothesise that composition will help the parser using unidirectional LSTMs in the same way it helps an S-LSTM.", "We additionally experiment with the vector representing the word at the input of the LSTM.", "The most complex representation consists of a concatenation of an embedding of the word type e(w i ), an embedding of the (predicted) POS tag of w i , p(w i ) and a character representation of the word obtained by running a BiLSTM over the characters ch 1:m of w i (BiLSTM(ch 1:m )).", "x i = e(w i ) β€’ p(w i ) β€’ BiLSTM(ch 1:m ) Without a POS tag embedding, the word vector is a representation of the word type.", "With POS information, we have some information about the word in the context of the sentence and the tagger has had access to the full sentence.", "The representation of the word at the input of the BiLSTM is therefore more contextualised and it can be expected that a recursive composition function will be less helpful than when POS information is not used.", "Character information has been shown to be useful for dependency parsing first by .", "and Smith et al.", "(2018b) among others have shown that POS and character information are somewhat complementary.", "used similar character vectors in the S-LSTM parser but did not look at the impact of composition when using these vectors.", "Here, we experiment with ablating either or both of the character and POS vectors.", "We look at the impact of using composition on the full model as well as these ablated models.", "We hypothesise that composition is most helpful when those vectors are not used, since they give information about the functional use of the 
word in context.", "Parser We use UUParser, a variant of the K&G transition-based parser that employs the arc-hybrid transition system from Kuhlmann et al.", "(2011) extended with a SWAP transition and a Static-Dynamic oracle, as described in de Lhoneux et al.", "(2017b) 4 .", "The SWAP transition is used to allow the construction of non-projective dependency trees (Nivre, 2009) .", "We use default hyperparameters.", "When using POS tags, we use the universal POS tags from the UD treebanks which are coarsegrained and consistent across languages.", "Those POS tags are predicted by UDPipe (Straka et al., 2016) both for training and parsing.", "This parser obtained the 7th best LAS score on average in the 2018 CoNLL shared task (Zeman et al., 2018) , about 2.5 LAS points below the best system, which uses an ensemble system as well as ELMo embeddings, as introduced by Peters et al.", "(2018) .", "Note, however, that we use a slightly impoverished version of the model used for the shared task which is described in Smith et al.", "(2018a) : we use a less accurate POS tagger (UDPipe) and we do not make use of multi-treebank models.", "In addition, Smith et al.", "(2018a) use the three top items of the stack as well as the first item of the buffer to represent the configuration, while we only use the two top items of the stack and the first item of the buffer.", "Smith et al.", "(2018a) also use an extended feature set as introduced by Kiperwasser and Goldberg (2016b) where they also use the rightmost and leftmost children of the items of the stack and buffer that they consider.", "We do not use that extended feature set.", "This is to keep the parser settings as simple as possible and avoid adding confounding factors.", "It is still a near-SOTA model.", "We evaluate parsing models on the development sets and report the average of the 5 best results in 30 epochs and 5 runs with different random seeds.", "Data We test our models on a sample of treebanks from Universal Dependencies v2.1 (Nivre et al., 2017) .", "We follow the criteria from de Lhoneux et al.", "(2017c) to select our sample: we ensure typological variety, we ensure variety of domains, we verify the quality of the treebanks, and we use one treebank with a large amount of non-projective arcs.", "However, unlike them, we do not use extremely small treebanks.", "Our selection is the same as theirs but we remove the tiny treebanks and replace them with 3 others.", "Our final set is: Ancient Greek (PROIEL), Basque, Chinese, Czech, English, Finnish, French, Hebrew and Japanese.", "Results First, we look at the effect of our different recursive composition functions on the full model (i.e., the model using a BiLSTM for feature extraction as well as both character and POS tag information).", "As can be seen from Figure 1 , recursive composition using an LSTM cell (+lc) is generally better than recursive composition with a recurrent cell (+rc), but neither technique reliably improves the accuracy of a BiLSTM parser.", "Ablating the forward and backward LSTMs Second, we only consider the models using character and POS information and look at the effect of ablating parts of the BiLSTM on the different languages.", "The results can be seen in Figure 2 .", "As expected, the BiLSTM parser performs considerably better than both unidirectional LSTM parsers, and the backward LSTM is considerably better than the forward LSTM, on average.", "It is, however, interesting to note that using a forward LSTM is much more hurtful for some languages than others: 
it is especially hurtful for Chinese and Japanese.", "This can be explained by language properties: the right-headed languages suffer more from ablating the backward LSTM than other languages.", "We observe a correlation between how hurtful a forward model is compared to the baseline and the percentage of right-headed content dependency relations (R = βˆ’0.838, p < .01), see Figure 3 .", "5 There is no significant correlation between how hurtful ablating the forward LSTM is and the percentage of left-headed content dependency relations (p > .05) indicating that its usefulness is not dependent on language properties.", "We hypothesise that dependency length or sentence length can play a role but we also find no correlation between how hurtful it is to ablate the forward LSTM and average dependency or sentence length in treebanks.", "It is finally also interesting to note that the backward LSTM performance is close to the BiLSTMs performance for some languages (Japanese and French).", "5 The reason we only consider content dependency relations is that the UD scheme focuses on dependency relations between content words and treats function words as features of content words to maximise parallelism across languages (de Marneffe et al., 2014) .", "We now look at the effect of using recursive composition on these ablated models.", "Results are given in Figure 4 .", "First of all, we observe unsurprisingly that composition using an LSTM cell is much better than using a simple recurrent cell.", "Second, both types of composition help the backward LSTM case, but neither reliably helps the bi models.", "Finally, the recurrent cell does not help the forward LSTM case but the LSTM cell does to some extent.", "It is interesting to note that using composition, especially using an LSTM cell, bridges a substantial part of the gap between the bw and the bi models.", "These results can be related to the literature on transition-based dependency parsing.", "Transitionbased parsers generally rely on two types of features: history-based features over the emerging dependency tree and lookahead features over the buffer of remaining input.", "The former are based on a hierarchical structure, the latter are purely sequential.", "McDonald and Nivre (2007) and Mc-Donald and Nivre (2011) have shown that historybased features enhance transition-based parsers as long as they do not suffer from error propagation.", "However, Nivre (2006) has also shown that lookahead features are absolutely crucial given the greedy left-to-right parsing strategy.", "In the model architectures considered here, the backward LSTM provides an improved lookahead.", "Similarly to the lookahead in statistical parsing, it is sequential.", "The difference is that it gives information about upcoming words with unbounded length.", "The forward LSTM in this model architecture provides history-based information but unlike in statistical parsing, that information is built sequentially rather than hierarchically: the forward LSTM passes through the sentence in the linear order of the sentence.", "In our results, we see that lookahead features are more important than the history-based ones.", "It hurts parsing accuracy more to ablate the backward LSTM than to ablate the forward one.", "This is expected given that some history-based information is still available through the top tokens on the stack, while the lookahead information is almost lost completely without the backward LSTM.", "A composition function gives hierarchical information about the 
history of parsing actions.", "It makes sense that it helps the backward LSTM model most since that model has no access to any information about parsing history.", "It helps the forward LSTM slightly which indicates that there can be gains from using structured information about parsing history rather than sequential information.", "We could then expect that composition should help the BiLSTM model which, how- Figure 5 : LAS of baseline, using char and/or POS tags to construct word representations ever, is not the case.", "This might be because the BiLSTM constructs information about parsing history and lookahead into a unique representation.", "In any case, this indicates that BiLSTMs are powerful feature extractors which seem to capture useful information about subtrees.", "Ablating POS and character information Next, we look at the effect of the different word representation methods on the different languages, as represented in Figure 5 .", "As is consistent with the literature de Lhoneux et al., 2017a; Smith et al., 2018b) , using characterbased word representations and/or POS tags consistently improves parsing accuracy but has a different impact in different languages and the benefits of both methods are not cumulative: using the two combined is not much better than using either on its own.", "In particular, character models are an efficient way to obtain large improvements in morphologically rich languages.", "We look at the impact of recursive compositions on all combinations of ablated models, see Table 1 .", "We only look at the impact of using an LSTM cell rather than a recurrent cell since it was a better technique across the board (see previous section).", "Looking first at BiLSTMs, it seems that composition does not reliably help parsing accuracy, regardless of access to POS and character information.", "This indicates that the vectors obtained from the BiLSTM already contain information that would otherwise be obtained by using composition.", "Turning to results with either the forward or the backward LSTM ablated, we see the expected pattern.", "Composition helps more when the model lacks POS tags, indicating that there is some redundancy between these two methods of building contextual information.", "Composition helps recover a substantial part of the gap of the model with a backward LSTM with or without POS tag.", "It recovers a much less substantial part of the gap in other cases which means that, although there is some redundancy between these different methods of building contextual information, they are still complementary and a recursive composition function cannot fully compensate for the lack of a backward LSTM or POS and/or character information.", "There are some language idiosyncracies in the results.", "While composition helps recover most of the gap for the backward LSTM models without POS and/or character information for Czech and English, it does it to a much smaller extent for Basque and Finnish.", "We hypothesise that arc depth might impact the usefulness of composition, since more depth means more matrix multiplications with the composition function.", "However, we find no correlation between average arc depth of the treebanks and usefulness of composition.", "It is an open question why composition helps some languages more than others.", "Note that we are not the first to use composition over vectors obtained from a BiLSTM in the context of dependency parsing, as this was done by Qi and Manning (2017) .", "The difference is that they compose vectors 
before scoring transitions.", "It was also done by Kiperwasser and Goldberg (2016a) who showed that using BiLSTM vectors for words in their Tree LSTM parser is helpful but they did not compare this to using BiLSTM vectors without the Tree LSTM.", "Recurrent and recursive LSTMs in the way they have been considered in this paper are two ways of constructing contextual information and making it available for local decisions in a greedy parser.", "The strength of recursive LSTMs is that they can build this contextual information using hierarchical context rather than linear context.", "A possible weakness is that this makes the model sensitive to error propagation: a wrong attachment leads to using the wrong contextual information.", "It is therefore possible that the benefits and drawbacks of using this method cancel each other out in the context of BiLSTMs.", "Ensemble To investigate further the information captured by BiLSTMs, we ensemble the 6 versions of the models with POS and character information with the different feature extractors (bi, bw, f w) with (+lc) and without composition.", "We use the (unweighted) reparsing technique of Sagae and Lavie (2006) 6 and ignoring labels.", "As can be seen from the UAS scores in Table 2 , the ensemble (full) largely outperforms the parser using only a BiLSTM, indicating that the information obtained from the different models is complementary.", "To investigate the contribution of each of the 6 models, we ablate each one by one.", "As can be seen from Table 2 , ablating either of the BiLSTM models or the backward LSTM using composition, results in the least effective of the ablated models, further strengthening the conclusion that BiL-STMs are powerful feature extractors.", "Conclusion We investigated the impact of composing the representation of subtrees in a transition-based parser.", "We observed that composition does not reliably help a parser that uses a BiLSTM for feature extraction, indicating that vectors obtained from the BiLSTM might capture subtree information, which is consistent with the results of Linzen et al.", "(2016) .", "However, we observe that, when ablating the backward LSTM, performance drops and recursive composition does not help to recover much of this gap.", "We hypothesise that this is because the backward LSTM primarily improves the lookahead for the greedy parser.", "When ablating the forward LSTM, performance drops to a smaller extent and recursive composition recovers a substantial part of the gap.", "This indicates that a forward LSTM and a recursive composition function capture similar information, which we take to be related to the rich history-based features crucial for a transition-based parser.", "To capture this infor-mation, a recursive composition function is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM.", "We further find that recursive composition helps more when POS tags are ablated from the model, indicating that POS tags and a recursive composition function are partly redundant ways of constructing contextual information.", "Finally, we correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "K&G Transition-Based Parsing", "Composing Subtree Representations", "Composition in a K&G Parser", "Results", "Ablating the forward and backward LSTMs", "Ablating POS and character information", "Ensemble", "Conclusion" ] }
GEM-SciDuet-train-107#paper-1284#slide-1
Recursive NN for Transition-Based Parsing
the largest city left-arc the left-arc largest city Recursive composition function in the stack-LSTM parser (Dyer et al., 2015): city1 = c(city0, largest, left nmod) city2 = c(city1, the, left det)
the largest city left-arc the left-arc largest city Recursive composition function in the stack-LSTM parser (Dyer et al., 2015): city1 = c(city0, largest, left nmod) city2 = c(city1, the, left det)
[]
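A toy, self-contained usage example mirroring this slide's "the largest city" derivation: city1 = c(city0, largest, left nmod), then city2 = c(city1, the, left det). The +rc cell from the earlier sketch is restated here so the snippet runs on its own; all vectors are random placeholders and the dimensions are made up.

```python
import torch
import torch.nn as nn

class RecurrentComposition(nn.Module):          # same +rc cell as in the earlier sketch
    def __init__(self, token_dim, rel_dim):
        super().__init__()
        self.W = nn.Linear(2 * token_dim + rel_dim, token_dim)
    def forward(self, head, dep, rel):
        return torch.tanh(self.W(torch.cat([head, dep, rel], dim=-1)))

torch.manual_seed(0)
dim, rel_dim = 4, 2
compose = RecurrentComposition(dim, rel_dim)

city0   = torch.randn(1, dim)                   # vector of the head "city"
largest = torch.randn(1, dim)                   # dependent "largest"
the     = torch.randn(1, dim)                   # dependent "the"
left_nmod = torch.randn(1, rel_dim)             # embedding of (nmod, left arc)
left_det  = torch.randn(1, rel_dim)             # embedding of (det, left arc)

city1 = compose(city0, largest, left_nmod)      # left-arc: largest <- city
city2 = compose(city1, the, left_det)           # left-arc: the <- city
print(city2.shape)                              # torch.Size([1, 4])
```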
GEM-SciDuet-train-107#paper-1284#slide-2
1284
Recursive Subtree Composition in LSTM-Based Dependency Parsing
The need for tree structure modelling on top of sequence modelling is an open issue in neural dependency parsing. We investigate the impact of adding a tree layer on top of a sequential model by recursively composing subtree representations (composition) in a transition-based parser that uses features extracted by a BiLSTM. Composition seems superfluous with such a model, suggesting that BiLSTMs capture information about subtrees. We perform model ablations to tease out the conditions under which composition helps. When ablating the backward LSTM, performance drops and composition does not recover much of the gap. When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information. We take the backward LSTM to be related to lookahead features and the forward LSTM to the rich history-based features both crucial for transition-based parsers. To capture history-based information, composition is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM. We correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages.
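To make the ablations mentioned in the abstract concrete, the sketch below shows (a) a word representation built from a word embedding, a (predicted) POS embedding, and the final states of a character BiLSTM, and (b) the three feature extractors compared in the paper (bi, fw, bw), where the bw variant simply runs a unidirectional LSTM over the reversed sentence. PyTorch is used only for illustration; all module names and sizes are assumptions rather than the authors' settings.

```python
import torch
import torch.nn as nn

class WordRepresentation(nn.Module):
    """x_i = concatenation of word embedding e(w_i), POS embedding p(w_i)
    and the final forward/backward states of a character BiLSTM over w_i."""
    def __init__(self, n_words, n_pos, n_chars, w_dim=100, p_dim=20, c_dim=25):
        super().__init__()
        self.word = nn.Embedding(n_words, w_dim)
        self.pos = nn.Embedding(n_pos, p_dim)
        self.char = nn.Embedding(n_chars, c_dim)
        self.char_bilstm = nn.LSTM(c_dim, c_dim, bidirectional=True, batch_first=True)

    def forward(self, word_id, pos_id, char_ids):
        _, (h, _) = self.char_bilstm(self.char(char_ids).unsqueeze(0))  # h: (2, 1, c_dim)
        char_vec = torch.cat([h[0, 0], h[1, 0]], dim=-1)                # fw and bw final states
        return torch.cat([self.word(word_id), self.pos(pos_id), char_vec], dim=-1)

def feature_extractor(kind, in_dim, lstm_dim):
    """bi(x, i) = BiLSTM(x_1:n, i); fw(x, i) = LSTM(x_1:n, i);
    bw(x, i) = LSTM(x_n:1, i), i.e. feed the reversed sentence to this LSTM."""
    return nn.LSTM(in_dim, lstm_dim, bidirectional=(kind == "bi"), batch_first=True)

# toy usage: one word with 4 characters
rep = WordRepresentation(n_words=1000, n_pos=17, n_chars=80)
x = rep(torch.tensor(5), torch.tensor(3), torch.tensor([1, 2, 3, 4]))
print(x.shape)   # torch.Size([170]) = 100 + 20 + 2 * 25
```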
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203 ], "paper_content_text": [ "Introduction Recursive neural networks allow us to construct vector representations of trees or subtrees.", "They have been used for constituency parsing by Socher et al.", "(2013) and Dyer et al.", "(2016) and for dependency parsing by Stenetorp (2013) and , among others.", "In particular, showed that composing representations of subtrees using recursive neural networks can be beneficial for transition-based dependency parsing.", "These results were further strengthened in Kuncoro et al.", "(2017) who showed, using ablation experiments, that composition is key in the Recurrent Neural Network Grammar (RNNG) generative parser by Dyer et al.", "(2016) .", "In a parallel development, Kiperwasser and Goldberg (2016b) showed that using BiLSTMs for feature extraction can lead to high parsing accuracy even with fairly simple parsing architectures, and using BiLSTMs for feature extraction has therefore become very popular in dependency parsing.", "It is used in the state-of-the-art parser of Dozat and Manning (2017) , was used in 8 of the 10 highest performing systems of the 2017 CoNLL shared task (Zeman et al., 2017) and 10 out of the 10 highest performing systems of the 2018 CoNLL shared task (Zeman et al., 2018) .", "This raises the question of whether features extracted with BiLSTMs in themselves capture information about subtrees, thus making recursive composition superfluous.", "Some support for this hypothesis comes from the results of Linzen et al.", "(2016) which indicate that LSTMs can capture hierarchical information: they can be trained to predict long-distance number agreement in English.", "Those results were extended to more constructions and three additional languages by Gulordava et al.", "(2018) .", "However, Kuncoro et al.", "(2018) have also shown that although sequential LSTMs can learn syntactic information, a recursive neural network which explicitly models hierarchy (the RNNG model from ) is better at this: it performs better on the number agreement task from Linzen et al.", "(2016) .", "To further explore this question in the context of dependency parsing, we investigate the use of recursive composition (henceforth referred to as composition) in a parser with an architecture like the one in Kiperwasser and Goldberg (2016b) .", "This allows us to explore variations of features and isolate the conditions under which composi-tion is helpful.", "We hypothesise that the use of a BiLSTM for feature extraction makes it possible to capture information about subtrees and therefore makes the use of subtree composition superfluous.", "We 
hypothesise that composition becomes useful when part of the BiLSTM is ablated, the forward or the backward LSTM.", "We further hypothesise that composition is most useful when the parser has no access to information about the function of words in the context of the sentence given by POS tags.", "When using POS tags, the tagger has indeed had access to the full sentence.", "We additionally look at what happens when we ablate character vectors which have been shown to capture information which is partially overlapping with information from POS tags.", "We experiment with a wider variety of languages than in order to explore whether the usefulness of different model variants vary depending on language type.", "K&G Transition-Based Parsing We define the parsing architecture introduced by Kiperwasser and Goldberg (2016b) at a high level of abstraction and henceforth refer to it as K&G.", "A K&G parser is a greedy transition-based parser.", "1 For an input sentence of length n with words w 1 , .", ".", ".", ", w n , a sequence of vectors x 1:n is created, where the vector x i is a vector representation of the word w i .", "We refer to these as type vectors, as they are the same for all occurrences of a word type.", "Type vectors are then passed through a feature function which learns representations of words in the context of the sentence.", "x i = e(w i ) v i = f (x 1:n , i) We refer to the vector v i as a token vector, as it is different for different tokens of the same word type.", "In Kiperwasser and Goldberg (2016b) , the feature function used is a BiLSTM.", "As is usual in transition-based parsing, parsing involves taking transitions from an initial configuration to a terminal one.", "Parser configurations are represented by a stack, a buffer and set of dependency arcs (Nivre, 2008) .", "For each configuration c, the feature extractor concatenates the token representations of core elements from the stack and buffer.", "These token vectors are passed to a classifier, typically a Multilayer Perceptron (MLP).", "The MLP scores transitions together with the arc labels for transitions that involve adding an arc.", "Both the word type vectors and the BiLSTMs are trained together with the model.", "looked at the impact of using a recursive composition function in their parser, which is also a transition-based parser but with an architecture different from K&G.", "They make use of a variant of the LSTM called a stack LSTM.", "A stack LSTM has push and pop operations which allow passing through states in a tree structure rather than sequentially.", "Stack LSTMs are used to represent the stack, the buffer, and the sequence of past parsing actions performed for a configuration.", "Composing Subtree Representations The words of the sentence are represented by vectors of the word types, together with a vector representing the word's POS tag.", "In the initial configuration, the vectors of all words are in the buffer and the stack is empty.", "The representation of the buffer is the end state of a backward LSTM over the word vectors.", "As parsing evolves, the word vectors are popped from the buffer, pushed to and popped from the stack and the representations of stack and buffer get updated.", "define a recursive composition function and compose tree representations incrementally, as dependents get attached to their head.", "The composed representation c is built by concatenating the vector h of the head with the vector of the dependent d, as well as a vector r representing the label paired with the 
direction of the arc.", "That concatenated vector is passed through an affine transformation and then through a tanh non-linear activation.", "c = tanh(W [h; d; r] + b) They create two versions of the parser.", "In the first version, when a dependent is attached to a head, the word vector of the head is replaced by a composed vector of the head and dependent.", "In the second version, they simply keep the vector of the head when attaching a dependent to a head.", "They observe that the version with composition is substantially better than the version without, by 1.3 LAS points for English (on the Penn Treebank (PTB) test set) and 2.1 for Chinese (on the Chinese Treebank (CTB) test set).", "Their parser uses POS tag information.", "POS tags help to disambiguate between different functional uses of a word and in this way give information about the use of the word in context.", "We hypothesise that the effect of using a recursive composition function is stronger when not making use of POS tags.", "Composition in a K&G Parser The parsing architectures of the stack LSTM parser (S-LSTM) and K&G are different but have some similarities.", "2 In both cases, the configuration is represented by vectors obtained by LSTMs.", "In K&G, it is represented by the token vectors of top items of the stack and the first item of the buffer.", "In the S-LSTM, it is represented by the vector representations of the entire stack, buffer and sequence of past transitions.", "Both types of parsers learn vector representations of word types which are passed to an LSTM.", "In K&G, they are passed to an LSTM in a feature extraction step that happens before parsing.", "The LSTM in this case is used to learn vectors that have information about the context of each word, a token vector.", "In the S-LSTM, word type vectors are passed to Stack LSTMs as parsing evolves.", "In this case, LSTMs are used to learn vector representations of the stack and buffer (as well as one which learns a representation of the parsing action history).", "When composition is not used in the S-LSTM, word vectors represent word types.", "When composition is used, as parsing evolves, the stack and buffer vectors get updated with information about the subtrees they contain, so that they gradually become contextualised.", "In this sense, those vectors become more like token vectors in K&G.", "More specifically, as explained in the previous section, when a dependent is attached to its head, the composition function is applied to the vectors of head and dependent and the vector of the head is replaced by this composed vector.", "We cannot apply composition on type vectors in the K&G architecture, since they are not used after the feature extraction step and hence cannot influence the representation of the configuration.", "Instead, we apply composition on the token vectors.", "We embed those composed representations in the same space as the token vectors.", "In K&G, like in the S-LSTM, we can create a composition function and compose the representation of subtrees as parsing evolves.", "We create two versions of the parser, one where word tokens are represented by their token vector.", "The other where they are represented by their token vector and the vector of their subtree c i , which is initially just a copy of the token vector (v i = f (x 1:n , i)β€’c i ).", "When a dependent word d is attached to a word h with a relation and direction r, c i is computed with the same composition function as in the S-LSTM defined in the previous section, repeated 
below.", "3 This composition function is a simple recurrent cell.", "Simple RNNs have known shortcomings which have been addressed by using LSTMs, as proposed by Hochreiter and Schmidhuber (1997) .", "A natural extension to this composition function is therefore to replace it with an LSTM cell.", "We also try this variant.", "We construct LSTMs for subtrees.", "We initialise a new LSTM for each new subtree that is formed, that is, when a dependent d is attached to a head h which does not have any dependent yet.", "Each time we attach a dependent to a head, we construct a vector which is a concatenation of h, d and r. We pass this vector to the LSTM of h. c is the output state of the LSTM after passing through that vector.", "We denote those models with +rc for the one using an ungated recurrent cell and with +lc for the one using an LSTM cell.", "c = tanh(W [h; d; r] + b) c = LSTM([h; d; r]) As results show (see Β§ 5), neither type of composition seems useful when used with the K&G parsing model, which indicates that BiLSTMs capture information about subtrees.", "To further investigate this and in order to isolate the conditions under which composition is helpful, we perform different model ablations and test the impact of recursive composition on these ablated models.", "First, we ablate parts of the BiLSTMs: we ablate either the forward or the backward LSTM.", "We therefore build parsers with 3 different feature functions f (x, i) over the word type vectors x i in the sentence x: a BiLSTM (bi) (our baseline), a backward LSTM (bw) (i.e., ablating the forward LSTM) and a forward LSTM (f w) (i.e., ablating the backward LSTM): bi(x, i) = BILSTM(x 1:n , i) bw(x, i) = LSTM(x n:1 , i) f w(x, i) = LSTM(x 1:n , i) K&G parsers with unidirectional LSTMs are, in some sense, more similar to the S-LSTM than those with a BiLSTM, since the S-LSTM only uses unidirectional LSTMs.", "We hypothesise that composition will help the parser using unidirectional LSTMs in the same way it helps an S-LSTM.", "We additionally experiment with the vector representing the word at the input of the LSTM.", "The most complex representation consists of a concatenation of an embedding of the word type e(w i ), an embedding of the (predicted) POS tag of w i , p(w i ) and a character representation of the word obtained by running a BiLSTM over the characters ch 1:m of w i (BiLSTM(ch 1:m )).", "x i = e(w i ) β€’ p(w i ) β€’ BiLSTM(ch 1:m ) Without a POS tag embedding, the word vector is a representation of the word type.", "With POS information, we have some information about the word in the context of the sentence and the tagger has had access to the full sentence.", "The representation of the word at the input of the BiLSTM is therefore more contextualised and it can be expected that a recursive composition function will be less helpful than when POS information is not used.", "Character information has been shown to be useful for dependency parsing first by .", "and Smith et al.", "(2018b) among others have shown that POS and character information are somewhat complementary.", "used similar character vectors in the S-LSTM parser but did not look at the impact of composition when using these vectors.", "Here, we experiment with ablating either or both of the character and POS vectors.", "We look at the impact of using composition on the full model as well as these ablated models.", "We hypothesise that composition is most helpful when those vectors are not used, since they give information about the functional use of the 
word in context.", "Parser We use UUParser, a variant of the K&G transition-based parser that employs the arc-hybrid transition system from Kuhlmann et al.", "(2011) extended with a SWAP transition and a Static-Dynamic oracle, as described in de Lhoneux et al.", "(2017b) 4 .", "The SWAP transition is used to allow the construction of non-projective dependency trees (Nivre, 2009) .", "We use default hyperparameters.", "When using POS tags, we use the universal POS tags from the UD treebanks which are coarsegrained and consistent across languages.", "Those POS tags are predicted by UDPipe (Straka et al., 2016) both for training and parsing.", "This parser obtained the 7th best LAS score on average in the 2018 CoNLL shared task (Zeman et al., 2018) , about 2.5 LAS points below the best system, which uses an ensemble system as well as ELMo embeddings, as introduced by Peters et al.", "(2018) .", "Note, however, that we use a slightly impoverished version of the model used for the shared task which is described in Smith et al.", "(2018a) : we use a less accurate POS tagger (UDPipe) and we do not make use of multi-treebank models.", "In addition, Smith et al.", "(2018a) use the three top items of the stack as well as the first item of the buffer to represent the configuration, while we only use the two top items of the stack and the first item of the buffer.", "Smith et al.", "(2018a) also use an extended feature set as introduced by Kiperwasser and Goldberg (2016b) where they also use the rightmost and leftmost children of the items of the stack and buffer that they consider.", "We do not use that extended feature set.", "This is to keep the parser settings as simple as possible and avoid adding confounding factors.", "It is still a near-SOTA model.", "We evaluate parsing models on the development sets and report the average of the 5 best results in 30 epochs and 5 runs with different random seeds.", "Data We test our models on a sample of treebanks from Universal Dependencies v2.1 (Nivre et al., 2017) .", "We follow the criteria from de Lhoneux et al.", "(2017c) to select our sample: we ensure typological variety, we ensure variety of domains, we verify the quality of the treebanks, and we use one treebank with a large amount of non-projective arcs.", "However, unlike them, we do not use extremely small treebanks.", "Our selection is the same as theirs but we remove the tiny treebanks and replace them with 3 others.", "Our final set is: Ancient Greek (PROIEL), Basque, Chinese, Czech, English, Finnish, French, Hebrew and Japanese.", "Results First, we look at the effect of our different recursive composition functions on the full model (i.e., the model using a BiLSTM for feature extraction as well as both character and POS tag information).", "As can be seen from Figure 1 , recursive composition using an LSTM cell (+lc) is generally better than recursive composition with a recurrent cell (+rc), but neither technique reliably improves the accuracy of a BiLSTM parser.", "Ablating the forward and backward LSTMs Second, we only consider the models using character and POS information and look at the effect of ablating parts of the BiLSTM on the different languages.", "The results can be seen in Figure 2 .", "As expected, the BiLSTM parser performs considerably better than both unidirectional LSTM parsers, and the backward LSTM is considerably better than the forward LSTM, on average.", "It is, however, interesting to note that using a forward LSTM is much more hurtful for some languages than others: 
it is especially hurtful for Chinese and Japanese.", "This can be explained by language properties: the right-headed languages suffer more from ablating the backward LSTM than other languages.", "We observe a correlation between how hurtful a forward model is compared to the baseline and the percentage of right-headed content dependency relations (R = βˆ’0.838, p < .01), see Figure 3 .", "5 There is no significant correlation between how hurtful ablating the forward LSTM is and the percentage of left-headed content dependency relations (p > .05) indicating that its usefulness is not dependent on language properties.", "We hypothesise that dependency length or sentence length can play a role but we also find no correlation between how hurtful it is to ablate the forward LSTM and average dependency or sentence length in treebanks.", "It is finally also interesting to note that the backward LSTM performance is close to the BiLSTMs performance for some languages (Japanese and French).", "5 The reason we only consider content dependency relations is that the UD scheme focuses on dependency relations between content words and treats function words as features of content words to maximise parallelism across languages (de Marneffe et al., 2014) .", "We now look at the effect of using recursive composition on these ablated models.", "Results are given in Figure 4 .", "First of all, we observe unsurprisingly that composition using an LSTM cell is much better than using a simple recurrent cell.", "Second, both types of composition help the backward LSTM case, but neither reliably helps the bi models.", "Finally, the recurrent cell does not help the forward LSTM case but the LSTM cell does to some extent.", "It is interesting to note that using composition, especially using an LSTM cell, bridges a substantial part of the gap between the bw and the bi models.", "These results can be related to the literature on transition-based dependency parsing.", "Transitionbased parsers generally rely on two types of features: history-based features over the emerging dependency tree and lookahead features over the buffer of remaining input.", "The former are based on a hierarchical structure, the latter are purely sequential.", "McDonald and Nivre (2007) and Mc-Donald and Nivre (2011) have shown that historybased features enhance transition-based parsers as long as they do not suffer from error propagation.", "However, Nivre (2006) has also shown that lookahead features are absolutely crucial given the greedy left-to-right parsing strategy.", "In the model architectures considered here, the backward LSTM provides an improved lookahead.", "Similarly to the lookahead in statistical parsing, it is sequential.", "The difference is that it gives information about upcoming words with unbounded length.", "The forward LSTM in this model architecture provides history-based information but unlike in statistical parsing, that information is built sequentially rather than hierarchically: the forward LSTM passes through the sentence in the linear order of the sentence.", "In our results, we see that lookahead features are more important than the history-based ones.", "It hurts parsing accuracy more to ablate the backward LSTM than to ablate the forward one.", "This is expected given that some history-based information is still available through the top tokens on the stack, while the lookahead information is almost lost completely without the backward LSTM.", "A composition function gives hierarchical information about the 
history of parsing actions.", "It makes sense that it helps the backward LSTM model most since that model has no access to any information about parsing history.", "It helps the forward LSTM slightly which indicates that there can be gains from using structured information about parsing history rather than sequential information.", "We could then expect that composition should help the BiLSTM model which, how- Figure 5 : LAS of baseline, using char and/or POS tags to construct word representations ever, is not the case.", "This might be because the BiLSTM constructs information about parsing history and lookahead into a unique representation.", "In any case, this indicates that BiLSTMs are powerful feature extractors which seem to capture useful information about subtrees.", "Ablating POS and character information Next, we look at the effect of the different word representation methods on the different languages, as represented in Figure 5 .", "As is consistent with the literature de Lhoneux et al., 2017a; Smith et al., 2018b) , using characterbased word representations and/or POS tags consistently improves parsing accuracy but has a different impact in different languages and the benefits of both methods are not cumulative: using the two combined is not much better than using either on its own.", "In particular, character models are an efficient way to obtain large improvements in morphologically rich languages.", "We look at the impact of recursive compositions on all combinations of ablated models, see Table 1 .", "We only look at the impact of using an LSTM cell rather than a recurrent cell since it was a better technique across the board (see previous section).", "Looking first at BiLSTMs, it seems that composition does not reliably help parsing accuracy, regardless of access to POS and character information.", "This indicates that the vectors obtained from the BiLSTM already contain information that would otherwise be obtained by using composition.", "Turning to results with either the forward or the backward LSTM ablated, we see the expected pattern.", "Composition helps more when the model lacks POS tags, indicating that there is some redundancy between these two methods of building contextual information.", "Composition helps recover a substantial part of the gap of the model with a backward LSTM with or without POS tag.", "It recovers a much less substantial part of the gap in other cases which means that, although there is some redundancy between these different methods of building contextual information, they are still complementary and a recursive composition function cannot fully compensate for the lack of a backward LSTM or POS and/or character information.", "There are some language idiosyncracies in the results.", "While composition helps recover most of the gap for the backward LSTM models without POS and/or character information for Czech and English, it does it to a much smaller extent for Basque and Finnish.", "We hypothesise that arc depth might impact the usefulness of composition, since more depth means more matrix multiplications with the composition function.", "However, we find no correlation between average arc depth of the treebanks and usefulness of composition.", "It is an open question why composition helps some languages more than others.", "Note that we are not the first to use composition over vectors obtained from a BiLSTM in the context of dependency parsing, as this was done by Qi and Manning (2017) .", "The difference is that they compose vectors 
before scoring transitions.", "It was also done by Kiperwasser and Goldberg (2016a) who showed that using BiLSTM vectors for words in their Tree LSTM parser is helpful but they did not compare this to using BiLSTM vectors without the Tree LSTM.", "Recurrent and recursive LSTMs in the way they have been considered in this paper are two ways of constructing contextual information and making it available for local decisions in a greedy parser.", "The strength of recursive LSTMs is that they can build this contextual information using hierarchical context rather than linear context.", "A possible weakness is that this makes the model sensitive to error propagation: a wrong attachment leads to using the wrong contextual information.", "It is therefore possible that the benefits and drawbacks of using this method cancel each other out in the context of BiLSTMs.", "Ensemble To investigate further the information captured by BiLSTMs, we ensemble the 6 versions of the models with POS and character information with the different feature extractors (bi, bw, f w) with (+lc) and without composition.", "We use the (unweighted) reparsing technique of Sagae and Lavie (2006) 6 and ignoring labels.", "As can be seen from the UAS scores in Table 2 , the ensemble (full) largely outperforms the parser using only a BiLSTM, indicating that the information obtained from the different models is complementary.", "To investigate the contribution of each of the 6 models, we ablate each one by one.", "As can be seen from Table 2 , ablating either of the BiLSTM models or the backward LSTM using composition, results in the least effective of the ablated models, further strengthening the conclusion that BiL-STMs are powerful feature extractors.", "Conclusion We investigated the impact of composing the representation of subtrees in a transition-based parser.", "We observed that composition does not reliably help a parser that uses a BiLSTM for feature extraction, indicating that vectors obtained from the BiLSTM might capture subtree information, which is consistent with the results of Linzen et al.", "(2016) .", "However, we observe that, when ablating the backward LSTM, performance drops and recursive composition does not help to recover much of this gap.", "We hypothesise that this is because the backward LSTM primarily improves the lookahead for the greedy parser.", "When ablating the forward LSTM, performance drops to a smaller extent and recursive composition recovers a substantial part of the gap.", "This indicates that a forward LSTM and a recursive composition function capture similar information, which we take to be related to the rich history-based features crucial for a transition-based parser.", "To capture this infor-mation, a recursive composition function is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM.", "We further find that recursive composition helps more when POS tags are ablated from the model, indicating that POS tags and a recursive composition function are partly redundant ways of constructing contextual information.", "Finally, we correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages." ] }
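To make the ensemble step described in the content above concrete, here is a minimal sketch of unweighted reparsing in the style of Sagae and Lavie (2006): each parsing model casts one vote for the head of every token, and an unlabelled tree is read off the vote counts. The function name, the greedy head selection and the toy inputs are illustrative assumptions rather than the authors' code; a maximum-spanning-tree decoder (e.g. Chu-Liu/Edmonds) would be needed to guarantee a well-formed tree.

```python
# Hedged sketch of unweighted reparsing (Sagae and Lavie, 2006): every model
# votes once for the head of each token; the ensemble tree is read off the votes.
from collections import Counter

def ensemble_reparse(predicted_heads_per_model):
    """predicted_heads_per_model: one list of head indices (0 = root) per model,
    all over the same sentence; token i (1-based) is entry i-1 of each list."""
    n = len(predicted_heads_per_model[0])
    votes = [Counter() for _ in range(n)]          # one ballot box per token
    for heads in predicted_heads_per_model:
        assert len(heads) == n
        for dep, head in enumerate(heads):
            votes[dep][head] += 1                  # unweighted: each model counts once
    # Greedy choice: most-voted head per token (an MST decoder would guarantee a tree).
    return [votes[dep].most_common(1)[0][0] for dep in range(n)]

# Toy example with three hypothetical models parsing a 4-token sentence:
print(ensemble_reparse([[2, 0, 2, 3],
                        [2, 0, 2, 2],
                        [3, 0, 2, 3]]))            # -> [2, 0, 2, 3]
```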
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "K&G Transition-Based Parsing", "Composing Subtree Representations", "Composition in a K&G Parser", "Results", "Ablating the forward and backward LSTMs", "Ablating POS and character information", "Ensemble", "Conclusion" ] }
GEM-SciDuet-train-107#paper-1284#slide-2
Transition Based Parsing using BiLSTM
the brown fox jumped root Vthe Vbrown Vfox Vjumped Vroot concat concat concat concat concat LSTM b LSTM b LSTM b LSTM b LSTM b LSTM f LSTM f LSTM f LSTM f LSTM f X the X brown X fox X jumped X root t h e b r o w n f o x j u m p e d e(the) e(brown) e(fox) e(jumped) pe(the) pe(brown) pe(fox) pe(jumped)
the brown fox jumped root Vthe Vbrown Vfox Vjumped Vroot concat concat concat concat concat LSTM b LSTM b LSTM b LSTM b LSTM b LSTM f LSTM f LSTM f LSTM f LSTM f X the X brown X fox X jumped X root t h e b r o w n f o x j u m p e d e(the) e(brown) e(fox) e(jumped) pe(the) pe(brown) pe(fox) pe(jumped)
[]
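The correlation reported in the record above, between how much ablating the backward LSTM hurts a treebank and its share of right-headed content dependency relations, can be reproduced with a standard Pearson test. The per-treebank numbers below are placeholders chosen only to illustrate the negative trend, not the paper's figures.

```python
# Sketch of the correlation analysis; values are hypothetical placeholders.
from scipy.stats import pearsonr

# % right-headed content dependency relations per treebank (hypothetical numbers)
right_headed = [30.1, 44.7, 61.2, 37.5, 35.0, 41.3, 33.8, 36.9, 78.4]
# LAS difference (forward-only LSTM minus BiLSTM baseline), hypothetical numbers
las_delta    = [-3.2, -5.0, -9.8, -4.1, -3.6, -4.4, -3.0, -3.9, -14.2]

r, p = pearsonr(right_headed, las_delta)
print(f"R = {r:.3f}, p = {p:.4f}")   # the paper reports R = -0.838, p < .01
```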
GEM-SciDuet-train-107#paper-1284#slide-3
1284
Recursive Subtree Composition in LSTM-Based Dependency Parsing
The need for tree structure modelling on top of sequence modelling is an open issue in neural dependency parsing. We investigate the impact of adding a tree layer on top of a sequential model by recursively composing subtree representations (composition) in a transition-based parser that uses features extracted by a BiLSTM. Composition seems superfluous with such a model, suggesting that BiLSTMs capture information about subtrees. We perform model ablations to tease out the conditions under which composition helps. When ablating the backward LSTM, performance drops and composition does not recover much of the gap. When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information. We take the backward LSTM to be related to lookahead features and the forward LSTM to the rich history-based features both crucial for transition-based parsers. To capture history-based information, composition is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM. We correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages.
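As a rough illustration of the three feature extractors compared in this paper, the BiLSTM baseline and its forward-only and backward-only ablations, the following PyTorch sketch turns word type vectors into token vectors. It is not the authors' implementation; all dimensions, names and the single-sentence batching are assumptions.

```python
# Hedged sketch of the bi / fw / bw feature extractors over word type vectors.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    def __init__(self, word_dim=100, hidden=125, mode="bi"):
        super().__init__()
        self.mode = mode
        self.lstm = nn.LSTM(word_dim, hidden, batch_first=True,
                            bidirectional=(mode == "bi"))

    def forward(self, x):                  # x: (1, n, word_dim) type vectors
        if self.mode == "bw":              # backward LSTM = forward LSTM on reversed input
            out, _ = self.lstm(torch.flip(x, dims=[1]))
            return torch.flip(out, dims=[1])
        out, _ = self.lstm(x)              # "bi" or "fw"
        return out                         # (1, n, hidden * num_directions) token vectors

x = torch.randn(1, 6, 100)                 # a toy 6-word sentence
for mode in ("bi", "fw", "bw"):
    print(mode, FeatureExtractor(mode=mode)(x).shape)
```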
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203 ], "paper_content_text": [ "Introduction Recursive neural networks allow us to construct vector representations of trees or subtrees.", "They have been used for constituency parsing by Socher et al.", "(2013) and Dyer et al.", "(2016) and for dependency parsing by Stenetorp (2013) and , among others.", "In particular, showed that composing representations of subtrees using recursive neural networks can be beneficial for transition-based dependency parsing.", "These results were further strengthened in Kuncoro et al.", "(2017) who showed, using ablation experiments, that composition is key in the Recurrent Neural Network Grammar (RNNG) generative parser by Dyer et al.", "(2016) .", "In a parallel development, Kiperwasser and Goldberg (2016b) showed that using BiLSTMs for feature extraction can lead to high parsing accuracy even with fairly simple parsing architectures, and using BiLSTMs for feature extraction has therefore become very popular in dependency parsing.", "It is used in the state-of-the-art parser of Dozat and Manning (2017) , was used in 8 of the 10 highest performing systems of the 2017 CoNLL shared task (Zeman et al., 2017) and 10 out of the 10 highest performing systems of the 2018 CoNLL shared task (Zeman et al., 2018) .", "This raises the question of whether features extracted with BiLSTMs in themselves capture information about subtrees, thus making recursive composition superfluous.", "Some support for this hypothesis comes from the results of Linzen et al.", "(2016) which indicate that LSTMs can capture hierarchical information: they can be trained to predict long-distance number agreement in English.", "Those results were extended to more constructions and three additional languages by Gulordava et al.", "(2018) .", "However, Kuncoro et al.", "(2018) have also shown that although sequential LSTMs can learn syntactic information, a recursive neural network which explicitly models hierarchy (the RNNG model from ) is better at this: it performs better on the number agreement task from Linzen et al.", "(2016) .", "To further explore this question in the context of dependency parsing, we investigate the use of recursive composition (henceforth referred to as composition) in a parser with an architecture like the one in Kiperwasser and Goldberg (2016b) .", "This allows us to explore variations of features and isolate the conditions under which composi-tion is helpful.", "We hypothesise that the use of a BiLSTM for feature extraction makes it possible to capture information about subtrees and therefore makes the use of subtree composition superfluous.", "We 
hypothesise that composition becomes useful when part of the BiLSTM is ablated, the forward or the backward LSTM.", "We further hypothesise that composition is most useful when the parser has no access to information about the function of words in the context of the sentence given by POS tags.", "When using POS tags, the tagger has indeed had access to the full sentence.", "We additionally look at what happens when we ablate character vectors which have been shown to capture information which is partially overlapping with information from POS tags.", "We experiment with a wider variety of languages than in order to explore whether the usefulness of different model variants vary depending on language type.", "K&G Transition-Based Parsing We define the parsing architecture introduced by Kiperwasser and Goldberg (2016b) at a high level of abstraction and henceforth refer to it as K&G.", "A K&G parser is a greedy transition-based parser.", "1 For an input sentence of length n with words w 1 , .", ".", ".", ", w n , a sequence of vectors x 1:n is created, where the vector x i is a vector representation of the word w i .", "We refer to these as type vectors, as they are the same for all occurrences of a word type.", "Type vectors are then passed through a feature function which learns representations of words in the context of the sentence.", "x i = e(w i ) v i = f (x 1:n , i) We refer to the vector v i as a token vector, as it is different for different tokens of the same word type.", "In Kiperwasser and Goldberg (2016b) , the feature function used is a BiLSTM.", "As is usual in transition-based parsing, parsing involves taking transitions from an initial configuration to a terminal one.", "Parser configurations are represented by a stack, a buffer and set of dependency arcs (Nivre, 2008) .", "For each configuration c, the feature extractor concatenates the token representations of core elements from the stack and buffer.", "These token vectors are passed to a classifier, typically a Multilayer Perceptron (MLP).", "The MLP scores transitions together with the arc labels for transitions that involve adding an arc.", "Both the word type vectors and the BiLSTMs are trained together with the model.", "looked at the impact of using a recursive composition function in their parser, which is also a transition-based parser but with an architecture different from K&G.", "They make use of a variant of the LSTM called a stack LSTM.", "A stack LSTM has push and pop operations which allow passing through states in a tree structure rather than sequentially.", "Stack LSTMs are used to represent the stack, the buffer, and the sequence of past parsing actions performed for a configuration.", "Composing Subtree Representations The words of the sentence are represented by vectors of the word types, together with a vector representing the word's POS tag.", "In the initial configuration, the vectors of all words are in the buffer and the stack is empty.", "The representation of the buffer is the end state of a backward LSTM over the word vectors.", "As parsing evolves, the word vectors are popped from the buffer, pushed to and popped from the stack and the representations of stack and buffer get updated.", "define a recursive composition function and compose tree representations incrementally, as dependents get attached to their head.", "The composed representation c is built by concatenating the vector h of the head with the vector of the dependent d, as well as a vector r representing the label paired with the 
direction of the arc.", "That concatenated vector is passed through an affine transformation and then through a tanh non-linear activation.", "c = tanh(W [h; d; r] + b) They create two versions of the parser.", "In the first version, when a dependent is attached to a head, the word vector of the head is replaced by a composed vector of the head and dependent.", "In the second version, they simply keep the vector of the head when attaching a dependent to a head.", "They observe that the version with composition is substantially better than the version without, by 1.3 LAS points for English (on the Penn Treebank (PTB) test set) and 2.1 for Chinese (on the Chinese Treebank (CTB) test set).", "Their parser uses POS tag information.", "POS tags help to disambiguate between different functional uses of a word and in this way give information about the use of the word in context.", "We hypothesise that the effect of using a recursive composition function is stronger when not making use of POS tags.", "Composition in a K&G Parser The parsing architectures of the stack LSTM parser (S-LSTM) and K&G are different but have some similarities.", "2 In both cases, the configuration is represented by vectors obtained by LSTMs.", "In K&G, it is represented by the token vectors of top items of the stack and the first item of the buffer.", "In the S-LSTM, it is represented by the vector representations of the entire stack, buffer and sequence of past transitions.", "Both types of parsers learn vector representations of word types which are passed to an LSTM.", "In K&G, they are passed to an LSTM in a feature extraction step that happens before parsing.", "The LSTM in this case is used to learn vectors that have information about the context of each word, a token vector.", "In the S-LSTM, word type vectors are passed to Stack LSTMs as parsing evolves.", "In this case, LSTMs are used to learn vector representations of the stack and buffer (as well as one which learns a representation of the parsing action history).", "When composition is not used in the S-LSTM, word vectors represent word types.", "When composition is used, as parsing evolves, the stack and buffer vectors get updated with information about the subtrees they contain, so that they gradually become contextualised.", "In this sense, those vectors become more like token vectors in K&G.", "More specifically, as explained in the previous section, when a dependent is attached to its head, the composition function is applied to the vectors of head and dependent and the vector of the head is replaced by this composed vector.", "We cannot apply composition on type vectors in the K&G architecture, since they are not used after the feature extraction step and hence cannot influence the representation of the configuration.", "Instead, we apply composition on the token vectors.", "We embed those composed representations in the same space as the token vectors.", "In K&G, like in the S-LSTM, we can create a composition function and compose the representation of subtrees as parsing evolves.", "We create two versions of the parser, one where word tokens are represented by their token vector.", "The other where they are represented by their token vector and the vector of their subtree c i , which is initially just a copy of the token vector (v i = f (x 1:n , i)β€’c i ).", "When a dependent word d is attached to a word h with a relation and direction r, c i is computed with the same composition function as in the S-LSTM defined in the previous section, repeated 
below.", "3 This composition function is a simple recurrent cell.", "Simple RNNs have known shortcomings which have been addressed by using LSTMs, as proposed by Hochreiter and Schmidhuber (1997) .", "A natural extension to this composition function is therefore to replace it with an LSTM cell.", "We also try this variant.", "We construct LSTMs for subtrees.", "We initialise a new LSTM for each new subtree that is formed, that is, when a dependent d is attached to a head h which does not have any dependent yet.", "Each time we attach a dependent to a head, we construct a vector which is a concatenation of h, d and r. We pass this vector to the LSTM of h. c is the output state of the LSTM after passing through that vector.", "We denote those models with +rc for the one using an ungated recurrent cell and with +lc for the one using an LSTM cell.", "c = tanh(W [h; d; r] + b) c = LSTM([h; d; r]) As results show (see Β§ 5), neither type of composition seems useful when used with the K&G parsing model, which indicates that BiLSTMs capture information about subtrees.", "To further investigate this and in order to isolate the conditions under which composition is helpful, we perform different model ablations and test the impact of recursive composition on these ablated models.", "First, we ablate parts of the BiLSTMs: we ablate either the forward or the backward LSTM.", "We therefore build parsers with 3 different feature functions f (x, i) over the word type vectors x i in the sentence x: a BiLSTM (bi) (our baseline), a backward LSTM (bw) (i.e., ablating the forward LSTM) and a forward LSTM (f w) (i.e., ablating the backward LSTM): bi(x, i) = BILSTM(x 1:n , i) bw(x, i) = LSTM(x n:1 , i) f w(x, i) = LSTM(x 1:n , i) K&G parsers with unidirectional LSTMs are, in some sense, more similar to the S-LSTM than those with a BiLSTM, since the S-LSTM only uses unidirectional LSTMs.", "We hypothesise that composition will help the parser using unidirectional LSTMs in the same way it helps an S-LSTM.", "We additionally experiment with the vector representing the word at the input of the LSTM.", "The most complex representation consists of a concatenation of an embedding of the word type e(w i ), an embedding of the (predicted) POS tag of w i , p(w i ) and a character representation of the word obtained by running a BiLSTM over the characters ch 1:m of w i (BiLSTM(ch 1:m )).", "x i = e(w i ) β€’ p(w i ) β€’ BiLSTM(ch 1:m ) Without a POS tag embedding, the word vector is a representation of the word type.", "With POS information, we have some information about the word in the context of the sentence and the tagger has had access to the full sentence.", "The representation of the word at the input of the BiLSTM is therefore more contextualised and it can be expected that a recursive composition function will be less helpful than when POS information is not used.", "Character information has been shown to be useful for dependency parsing first by .", "and Smith et al.", "(2018b) among others have shown that POS and character information are somewhat complementary.", "used similar character vectors in the S-LSTM parser but did not look at the impact of composition when using these vectors.", "Here, we experiment with ablating either or both of the character and POS vectors.", "We look at the impact of using composition on the full model as well as these ablated models.", "We hypothesise that composition is most helpful when those vectors are not used, since they give information about the functional use of the 
word in context.", "Parser We use UUParser, a variant of the K&G transition-based parser that employs the arc-hybrid transition system from Kuhlmann et al.", "(2011) extended with a SWAP transition and a Static-Dynamic oracle, as described in de Lhoneux et al.", "(2017b) 4 .", "The SWAP transition is used to allow the construction of non-projective dependency trees (Nivre, 2009) .", "We use default hyperparameters.", "When using POS tags, we use the universal POS tags from the UD treebanks which are coarsegrained and consistent across languages.", "Those POS tags are predicted by UDPipe (Straka et al., 2016) both for training and parsing.", "This parser obtained the 7th best LAS score on average in the 2018 CoNLL shared task (Zeman et al., 2018) , about 2.5 LAS points below the best system, which uses an ensemble system as well as ELMo embeddings, as introduced by Peters et al.", "(2018) .", "Note, however, that we use a slightly impoverished version of the model used for the shared task which is described in Smith et al.", "(2018a) : we use a less accurate POS tagger (UDPipe) and we do not make use of multi-treebank models.", "In addition, Smith et al.", "(2018a) use the three top items of the stack as well as the first item of the buffer to represent the configuration, while we only use the two top items of the stack and the first item of the buffer.", "Smith et al.", "(2018a) also use an extended feature set as introduced by Kiperwasser and Goldberg (2016b) where they also use the rightmost and leftmost children of the items of the stack and buffer that they consider.", "We do not use that extended feature set.", "This is to keep the parser settings as simple as possible and avoid adding confounding factors.", "It is still a near-SOTA model.", "We evaluate parsing models on the development sets and report the average of the 5 best results in 30 epochs and 5 runs with different random seeds.", "Data We test our models on a sample of treebanks from Universal Dependencies v2.1 (Nivre et al., 2017) .", "We follow the criteria from de Lhoneux et al.", "(2017c) to select our sample: we ensure typological variety, we ensure variety of domains, we verify the quality of the treebanks, and we use one treebank with a large amount of non-projective arcs.", "However, unlike them, we do not use extremely small treebanks.", "Our selection is the same as theirs but we remove the tiny treebanks and replace them with 3 others.", "Our final set is: Ancient Greek (PROIEL), Basque, Chinese, Czech, English, Finnish, French, Hebrew and Japanese.", "Results First, we look at the effect of our different recursive composition functions on the full model (i.e., the model using a BiLSTM for feature extraction as well as both character and POS tag information).", "As can be seen from Figure 1 , recursive composition using an LSTM cell (+lc) is generally better than recursive composition with a recurrent cell (+rc), but neither technique reliably improves the accuracy of a BiLSTM parser.", "Ablating the forward and backward LSTMs Second, we only consider the models using character and POS information and look at the effect of ablating parts of the BiLSTM on the different languages.", "The results can be seen in Figure 2 .", "As expected, the BiLSTM parser performs considerably better than both unidirectional LSTM parsers, and the backward LSTM is considerably better than the forward LSTM, on average.", "It is, however, interesting to note that using a forward LSTM is much more hurtful for some languages than others: 
it is especially hurtful for Chinese and Japanese.", "This can be explained by language properties: the right-headed languages suffer more from ablating the backward LSTM than other languages.", "We observe a correlation between how hurtful a forward model is compared to the baseline and the percentage of right-headed content dependency relations (R = βˆ’0.838, p < .01), see Figure 3 .", "5 There is no significant correlation between how hurtful ablating the forward LSTM is and the percentage of left-headed content dependency relations (p > .05) indicating that its usefulness is not dependent on language properties.", "We hypothesise that dependency length or sentence length can play a role but we also find no correlation between how hurtful it is to ablate the forward LSTM and average dependency or sentence length in treebanks.", "It is finally also interesting to note that the backward LSTM performance is close to the BiLSTMs performance for some languages (Japanese and French).", "5 The reason we only consider content dependency relations is that the UD scheme focuses on dependency relations between content words and treats function words as features of content words to maximise parallelism across languages (de Marneffe et al., 2014) .", "We now look at the effect of using recursive composition on these ablated models.", "Results are given in Figure 4 .", "First of all, we observe unsurprisingly that composition using an LSTM cell is much better than using a simple recurrent cell.", "Second, both types of composition help the backward LSTM case, but neither reliably helps the bi models.", "Finally, the recurrent cell does not help the forward LSTM case but the LSTM cell does to some extent.", "It is interesting to note that using composition, especially using an LSTM cell, bridges a substantial part of the gap between the bw and the bi models.", "These results can be related to the literature on transition-based dependency parsing.", "Transitionbased parsers generally rely on two types of features: history-based features over the emerging dependency tree and lookahead features over the buffer of remaining input.", "The former are based on a hierarchical structure, the latter are purely sequential.", "McDonald and Nivre (2007) and Mc-Donald and Nivre (2011) have shown that historybased features enhance transition-based parsers as long as they do not suffer from error propagation.", "However, Nivre (2006) has also shown that lookahead features are absolutely crucial given the greedy left-to-right parsing strategy.", "In the model architectures considered here, the backward LSTM provides an improved lookahead.", "Similarly to the lookahead in statistical parsing, it is sequential.", "The difference is that it gives information about upcoming words with unbounded length.", "The forward LSTM in this model architecture provides history-based information but unlike in statistical parsing, that information is built sequentially rather than hierarchically: the forward LSTM passes through the sentence in the linear order of the sentence.", "In our results, we see that lookahead features are more important than the history-based ones.", "It hurts parsing accuracy more to ablate the backward LSTM than to ablate the forward one.", "This is expected given that some history-based information is still available through the top tokens on the stack, while the lookahead information is almost lost completely without the backward LSTM.", "A composition function gives hierarchical information about the 
history of parsing actions.", "It makes sense that it helps the backward LSTM model most since that model has no access to any information about parsing history.", "It helps the forward LSTM slightly which indicates that there can be gains from using structured information about parsing history rather than sequential information.", "We could then expect that composition should help the BiLSTM model which, how- Figure 5 : LAS of baseline, using char and/or POS tags to construct word representations ever, is not the case.", "This might be because the BiLSTM constructs information about parsing history and lookahead into a unique representation.", "In any case, this indicates that BiLSTMs are powerful feature extractors which seem to capture useful information about subtrees.", "Ablating POS and character information Next, we look at the effect of the different word representation methods on the different languages, as represented in Figure 5 .", "As is consistent with the literature de Lhoneux et al., 2017a; Smith et al., 2018b) , using characterbased word representations and/or POS tags consistently improves parsing accuracy but has a different impact in different languages and the benefits of both methods are not cumulative: using the two combined is not much better than using either on its own.", "In particular, character models are an efficient way to obtain large improvements in morphologically rich languages.", "We look at the impact of recursive compositions on all combinations of ablated models, see Table 1 .", "We only look at the impact of using an LSTM cell rather than a recurrent cell since it was a better technique across the board (see previous section).", "Looking first at BiLSTMs, it seems that composition does not reliably help parsing accuracy, regardless of access to POS and character information.", "This indicates that the vectors obtained from the BiLSTM already contain information that would otherwise be obtained by using composition.", "Turning to results with either the forward or the backward LSTM ablated, we see the expected pattern.", "Composition helps more when the model lacks POS tags, indicating that there is some redundancy between these two methods of building contextual information.", "Composition helps recover a substantial part of the gap of the model with a backward LSTM with or without POS tag.", "It recovers a much less substantial part of the gap in other cases which means that, although there is some redundancy between these different methods of building contextual information, they are still complementary and a recursive composition function cannot fully compensate for the lack of a backward LSTM or POS and/or character information.", "There are some language idiosyncracies in the results.", "While composition helps recover most of the gap for the backward LSTM models without POS and/or character information for Czech and English, it does it to a much smaller extent for Basque and Finnish.", "We hypothesise that arc depth might impact the usefulness of composition, since more depth means more matrix multiplications with the composition function.", "However, we find no correlation between average arc depth of the treebanks and usefulness of composition.", "It is an open question why composition helps some languages more than others.", "Note that we are not the first to use composition over vectors obtained from a BiLSTM in the context of dependency parsing, as this was done by Qi and Manning (2017) .", "The difference is that they compose vectors 
before scoring transitions.", "It was also done by Kiperwasser and Goldberg (2016a) who showed that using BiLSTM vectors for words in their Tree LSTM parser is helpful but they did not compare this to using BiLSTM vectors without the Tree LSTM.", "Recurrent and recursive LSTMs in the way they have been considered in this paper are two ways of constructing contextual information and making it available for local decisions in a greedy parser.", "The strength of recursive LSTMs is that they can build this contextual information using hierarchical context rather than linear context.", "A possible weakness is that this makes the model sensitive to error propagation: a wrong attachment leads to using the wrong contextual information.", "It is therefore possible that the benefits and drawbacks of using this method cancel each other out in the context of BiLSTMs.", "Ensemble To investigate further the information captured by BiLSTMs, we ensemble the 6 versions of the models with POS and character information with the different feature extractors (bi, bw, f w) with (+lc) and without composition.", "We use the (unweighted) reparsing technique of Sagae and Lavie (2006) 6 and ignoring labels.", "As can be seen from the UAS scores in Table 2 , the ensemble (full) largely outperforms the parser using only a BiLSTM, indicating that the information obtained from the different models is complementary.", "To investigate the contribution of each of the 6 models, we ablate each one by one.", "As can be seen from Table 2 , ablating either of the BiLSTM models or the backward LSTM using composition, results in the least effective of the ablated models, further strengthening the conclusion that BiL-STMs are powerful feature extractors.", "Conclusion We investigated the impact of composing the representation of subtrees in a transition-based parser.", "We observed that composition does not reliably help a parser that uses a BiLSTM for feature extraction, indicating that vectors obtained from the BiLSTM might capture subtree information, which is consistent with the results of Linzen et al.", "(2016) .", "However, we observe that, when ablating the backward LSTM, performance drops and recursive composition does not help to recover much of this gap.", "We hypothesise that this is because the backward LSTM primarily improves the lookahead for the greedy parser.", "When ablating the forward LSTM, performance drops to a smaller extent and recursive composition recovers a substantial part of the gap.", "This indicates that a forward LSTM and a recursive composition function capture similar information, which we take to be related to the rich history-based features crucial for a transition-based parser.", "To capture this infor-mation, a recursive composition function is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM.", "We further find that recursive composition helps more when POS tags are ablated from the model, indicating that POS tags and a recursive composition function are partly redundant ways of constructing contextual information.", "Finally, we correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages." ] }
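The two recursive composition functions described in the content above, the simple recurrent cell c = tanh(W[h; d; r] + b) (+rc) and the LSTM-cell variant c = LSTM([h; d; r]) (+lc), can be sketched as follows over token vectors for the head (h) and dependent (d) and a relation/direction embedding (r). Dimensions and class names are illustrative; this is not the authors' code.

```python
# Hedged sketch of the +rc and +lc composition functions applied at each attachment.
import torch
import torch.nn as nn

class RecurrentComposition(nn.Module):           # +rc: c = tanh(W[h; d; r] + b)
    def __init__(self, tok_dim=250, rel_dim=25):
        super().__init__()
        self.linear = nn.Linear(2 * tok_dim + rel_dim, tok_dim)

    def forward(self, h, d, r):
        return torch.tanh(self.linear(torch.cat([h, d, r], dim=-1)))

class LSTMComposition(nn.Module):                # +lc: c = LSTM([h; d; r])
    def __init__(self, tok_dim=250, rel_dim=25):
        super().__init__()
        self.cell = nn.LSTMCell(2 * tok_dim + rel_dim, tok_dim)

    def forward(self, h, d, r, state=None):
        # `state` is the (h, c) pair of the head's subtree LSTM; None starts a
        # fresh LSTM when a head receives its first dependent.
        hx, cx = self.cell(torch.cat([h, d, r], dim=-1), state)
        return hx, (hx, cx)                      # composed vector + updated subtree state

h, d, r = torch.randn(1, 250), torch.randn(1, 250), torch.randn(1, 25)
c_rc = RecurrentComposition()(h, d, r)
c_lc, subtree_state = LSTMComposition()(h, d, r)
print(c_rc.shape, c_lc.shape)                    # both (1, 250)
```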
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "K&G Transition-Based Parsing", "Composing Subtree Representations", "Composition in a K&G Parser", "Results", "Ablating the forward and backward LSTMs", "Ablating POS and character information", "Ensemble", "Conclusion" ] }
GEM-SciDuet-train-107#paper-1284#slide-3
Transition Based Parsing using BiLSTMs
X the X brown X fox X jumped X root Vthe Vbrown Vfox Vjumped Vroot concat concat concat concat concat LSTM b LSTM b LSTM b LSTM b LSTM b LSTM f LSTM f LSTM f LSTM f LSTM f
X the X brown X fox X jumped X root Vthe Vbrown Vfox Vjumped Vroot concat concat concat concat concat LSTM b LSTM b LSTM b LSTM b LSTM b LSTM f LSTM f LSTM f LSTM f LSTM f
[]
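The word type representation used in the full model, the concatenation of a word embedding, a (predicted) universal POS embedding and a character BiLSTM state, with the POS and character parts that the ablation experiments remove, might look roughly like the sketch below. Vocabulary sizes and dimensions are assumptions, not the paper's hyperparameters.

```python
# Hedged sketch of x_i = e(w_i) . p(w_i) . BiLSTM(ch_1:m) with ablatable parts.
import torch
import torch.nn as nn

class WordRepresentation(nn.Module):
    def __init__(self, n_words=10000, n_chars=100, n_pos=17,   # UD has 17 universal POS tags
                 word_dim=100, char_dim=24, pos_dim=20, use_pos=True, use_char=True):
        super().__init__()
        self.use_pos, self.use_char = use_pos, use_char
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.pos_emb = nn.Embedding(n_pos, pos_dim)
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_dim, bidirectional=True, batch_first=True)

    def forward(self, word_id, pos_id, char_ids):
        parts = [self.word_emb(word_id)]
        if self.use_pos:                       # ablated in the "no POS" models
            parts.append(self.pos_emb(pos_id))
        if self.use_char:                      # ablated in the "no char" models
            _, (hn, _) = self.char_lstm(self.char_emb(char_ids).unsqueeze(0))
            parts.append(torch.cat([hn[0, 0], hn[1, 0]], dim=-1))  # fwd + bwd final states
        return torch.cat(parts, dim=-1)

rep = WordRepresentation()
x = rep(torch.tensor(42), torch.tensor(3), torch.arange(5))   # a toy 5-character word
print(x.shape)                                 # 100 + 20 + 48 dimensions
```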
GEM-SciDuet-train-107#paper-1284#slide-4
1284
Recursive Subtree Composition in LSTM-Based Dependency Parsing
The need for tree structure modelling on top of sequence modelling is an open issue in neural dependency parsing. We investigate the impact of adding a tree layer on top of a sequential model by recursively composing subtree representations (composition) in a transition-based parser that uses features extracted by a BiLSTM. Composition seems superfluous with such a model, suggesting that BiLSTMs capture information about subtrees. We perform model ablations to tease out the conditions under which composition helps. When ablating the backward LSTM, performance drops and composition does not recover much of the gap. When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information. We take the backward LSTM to be related to lookahead features and the forward LSTM to the rich history-based features both crucial for transition-based parsers. To capture history-based information, composition is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM. We correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203 ], "paper_content_text": [ "Introduction Recursive neural networks allow us to construct vector representations of trees or subtrees.", "They have been used for constituency parsing by Socher et al.", "(2013) and Dyer et al.", "(2016) and for dependency parsing by Stenetorp (2013) and , among others.", "In particular, showed that composing representations of subtrees using recursive neural networks can be beneficial for transition-based dependency parsing.", "These results were further strengthened in Kuncoro et al.", "(2017) who showed, using ablation experiments, that composition is key in the Recurrent Neural Network Grammar (RNNG) generative parser by Dyer et al.", "(2016) .", "In a parallel development, Kiperwasser and Goldberg (2016b) showed that using BiLSTMs for feature extraction can lead to high parsing accuracy even with fairly simple parsing architectures, and using BiLSTMs for feature extraction has therefore become very popular in dependency parsing.", "It is used in the state-of-the-art parser of Dozat and Manning (2017) , was used in 8 of the 10 highest performing systems of the 2017 CoNLL shared task (Zeman et al., 2017) and 10 out of the 10 highest performing systems of the 2018 CoNLL shared task (Zeman et al., 2018) .", "This raises the question of whether features extracted with BiLSTMs in themselves capture information about subtrees, thus making recursive composition superfluous.", "Some support for this hypothesis comes from the results of Linzen et al.", "(2016) which indicate that LSTMs can capture hierarchical information: they can be trained to predict long-distance number agreement in English.", "Those results were extended to more constructions and three additional languages by Gulordava et al.", "(2018) .", "However, Kuncoro et al.", "(2018) have also shown that although sequential LSTMs can learn syntactic information, a recursive neural network which explicitly models hierarchy (the RNNG model from ) is better at this: it performs better on the number agreement task from Linzen et al.", "(2016) .", "To further explore this question in the context of dependency parsing, we investigate the use of recursive composition (henceforth referred to as composition) in a parser with an architecture like the one in Kiperwasser and Goldberg (2016b) .", "This allows us to explore variations of features and isolate the conditions under which composi-tion is helpful.", "We hypothesise that the use of a BiLSTM for feature extraction makes it possible to capture information about subtrees and therefore makes the use of subtree composition superfluous.", "We 
hypothesise that composition becomes useful when part of the BiLSTM is ablated, the forward or the backward LSTM.", "We further hypothesise that composition is most useful when the parser has no access to information about the function of words in the context of the sentence given by POS tags.", "When using POS tags, the tagger has indeed had access to the full sentence.", "We additionally look at what happens when we ablate character vectors which have been shown to capture information which is partially overlapping with information from POS tags.", "We experiment with a wider variety of languages than in order to explore whether the usefulness of different model variants vary depending on language type.", "K&G Transition-Based Parsing We define the parsing architecture introduced by Kiperwasser and Goldberg (2016b) at a high level of abstraction and henceforth refer to it as K&G.", "A K&G parser is a greedy transition-based parser.", "1 For an input sentence of length n with words w 1 , .", ".", ".", ", w n , a sequence of vectors x 1:n is created, where the vector x i is a vector representation of the word w i .", "We refer to these as type vectors, as they are the same for all occurrences of a word type.", "Type vectors are then passed through a feature function which learns representations of words in the context of the sentence.", "x i = e(w i ) v i = f (x 1:n , i) We refer to the vector v i as a token vector, as it is different for different tokens of the same word type.", "In Kiperwasser and Goldberg (2016b) , the feature function used is a BiLSTM.", "As is usual in transition-based parsing, parsing involves taking transitions from an initial configuration to a terminal one.", "Parser configurations are represented by a stack, a buffer and set of dependency arcs (Nivre, 2008) .", "For each configuration c, the feature extractor concatenates the token representations of core elements from the stack and buffer.", "These token vectors are passed to a classifier, typically a Multilayer Perceptron (MLP).", "The MLP scores transitions together with the arc labels for transitions that involve adding an arc.", "Both the word type vectors and the BiLSTMs are trained together with the model.", "looked at the impact of using a recursive composition function in their parser, which is also a transition-based parser but with an architecture different from K&G.", "They make use of a variant of the LSTM called a stack LSTM.", "A stack LSTM has push and pop operations which allow passing through states in a tree structure rather than sequentially.", "Stack LSTMs are used to represent the stack, the buffer, and the sequence of past parsing actions performed for a configuration.", "Composing Subtree Representations The words of the sentence are represented by vectors of the word types, together with a vector representing the word's POS tag.", "In the initial configuration, the vectors of all words are in the buffer and the stack is empty.", "The representation of the buffer is the end state of a backward LSTM over the word vectors.", "As parsing evolves, the word vectors are popped from the buffer, pushed to and popped from the stack and the representations of stack and buffer get updated.", "define a recursive composition function and compose tree representations incrementally, as dependents get attached to their head.", "The composed representation c is built by concatenating the vector h of the head with the vector of the dependent d, as well as a vector r representing the label paired with the 
direction of the arc.", "That concatenated vector is passed through an affine transformation and then through a tanh non-linear activation.", "c = tanh(W [h; d; r] + b) They create two versions of the parser.", "In the first version, when a dependent is attached to a head, the word vector of the head is replaced by a composed vector of the head and dependent.", "In the second version, they simply keep the vector of the head when attaching a dependent to a head.", "They observe that the version with composition is substantially better than the version without, by 1.3 LAS points for English (on the Penn Treebank (PTB) test set) and 2.1 for Chinese (on the Chinese Treebank (CTB) test set).", "Their parser uses POS tag information.", "POS tags help to disambiguate between different functional uses of a word and in this way give information about the use of the word in context.", "We hypothesise that the effect of using a recursive composition function is stronger when not making use of POS tags.", "Composition in a K&G Parser The parsing architectures of the stack LSTM parser (S-LSTM) and K&G are different but have some similarities.", "2 In both cases, the configuration is represented by vectors obtained by LSTMs.", "In K&G, it is represented by the token vectors of top items of the stack and the first item of the buffer.", "In the S-LSTM, it is represented by the vector representations of the entire stack, buffer and sequence of past transitions.", "Both types of parsers learn vector representations of word types which are passed to an LSTM.", "In K&G, they are passed to an LSTM in a feature extraction step that happens before parsing.", "The LSTM in this case is used to learn vectors that have information about the context of each word, a token vector.", "In the S-LSTM, word type vectors are passed to Stack LSTMs as parsing evolves.", "In this case, LSTMs are used to learn vector representations of the stack and buffer (as well as one which learns a representation of the parsing action history).", "When composition is not used in the S-LSTM, word vectors represent word types.", "When composition is used, as parsing evolves, the stack and buffer vectors get updated with information about the subtrees they contain, so that they gradually become contextualised.", "In this sense, those vectors become more like token vectors in K&G.", "More specifically, as explained in the previous section, when a dependent is attached to its head, the composition function is applied to the vectors of head and dependent and the vector of the head is replaced by this composed vector.", "We cannot apply composition on type vectors in the K&G architecture, since they are not used after the feature extraction step and hence cannot influence the representation of the configuration.", "Instead, we apply composition on the token vectors.", "We embed those composed representations in the same space as the token vectors.", "In K&G, like in the S-LSTM, we can create a composition function and compose the representation of subtrees as parsing evolves.", "We create two versions of the parser, one where word tokens are represented by their token vector.", "The other where they are represented by their token vector and the vector of their subtree c i , which is initially just a copy of the token vector (v i = f (x 1:n , i)β€’c i ).", "When a dependent word d is attached to a word h with a relation and direction r, c i is computed with the same composition function as in the S-LSTM defined in the previous section, repeated 
below.", "3 This composition function is a simple recurrent cell.", "Simple RNNs have known shortcomings which have been addressed by using LSTMs, as proposed by Hochreiter and Schmidhuber (1997) .", "A natural extension to this composition function is therefore to replace it with an LSTM cell.", "We also try this variant.", "We construct LSTMs for subtrees.", "We initialise a new LSTM for each new subtree that is formed, that is, when a dependent d is attached to a head h which does not have any dependent yet.", "Each time we attach a dependent to a head, we construct a vector which is a concatenation of h, d and r. We pass this vector to the LSTM of h. c is the output state of the LSTM after passing through that vector.", "We denote those models with +rc for the one using an ungated recurrent cell and with +lc for the one using an LSTM cell.", "c = tanh(W [h; d; r] + b) c = LSTM([h; d; r]) As results show (see Β§ 5), neither type of composition seems useful when used with the K&G parsing model, which indicates that BiLSTMs capture information about subtrees.", "To further investigate this and in order to isolate the conditions under which composition is helpful, we perform different model ablations and test the impact of recursive composition on these ablated models.", "First, we ablate parts of the BiLSTMs: we ablate either the forward or the backward LSTM.", "We therefore build parsers with 3 different feature functions f (x, i) over the word type vectors x i in the sentence x: a BiLSTM (bi) (our baseline), a backward LSTM (bw) (i.e., ablating the forward LSTM) and a forward LSTM (f w) (i.e., ablating the backward LSTM): bi(x, i) = BILSTM(x 1:n , i) bw(x, i) = LSTM(x n:1 , i) f w(x, i) = LSTM(x 1:n , i) K&G parsers with unidirectional LSTMs are, in some sense, more similar to the S-LSTM than those with a BiLSTM, since the S-LSTM only uses unidirectional LSTMs.", "We hypothesise that composition will help the parser using unidirectional LSTMs in the same way it helps an S-LSTM.", "We additionally experiment with the vector representing the word at the input of the LSTM.", "The most complex representation consists of a concatenation of an embedding of the word type e(w i ), an embedding of the (predicted) POS tag of w i , p(w i ) and a character representation of the word obtained by running a BiLSTM over the characters ch 1:m of w i (BiLSTM(ch 1:m )).", "x i = e(w i ) β€’ p(w i ) β€’ BiLSTM(ch 1:m ) Without a POS tag embedding, the word vector is a representation of the word type.", "With POS information, we have some information about the word in the context of the sentence and the tagger has had access to the full sentence.", "The representation of the word at the input of the BiLSTM is therefore more contextualised and it can be expected that a recursive composition function will be less helpful than when POS information is not used.", "Character information has been shown to be useful for dependency parsing first by .", "and Smith et al.", "(2018b) among others have shown that POS and character information are somewhat complementary.", "used similar character vectors in the S-LSTM parser but did not look at the impact of composition when using these vectors.", "Here, we experiment with ablating either or both of the character and POS vectors.", "We look at the impact of using composition on the full model as well as these ablated models.", "We hypothesise that composition is most helpful when those vectors are not used, since they give information about the functional use of the 
word in context.", "Parser We use UUParser, a variant of the K&G transition-based parser that employs the arc-hybrid transition system from Kuhlmann et al.", "(2011) extended with a SWAP transition and a Static-Dynamic oracle, as described in de Lhoneux et al.", "(2017b) 4 .", "The SWAP transition is used to allow the construction of non-projective dependency trees (Nivre, 2009) .", "We use default hyperparameters.", "When using POS tags, we use the universal POS tags from the UD treebanks which are coarsegrained and consistent across languages.", "Those POS tags are predicted by UDPipe (Straka et al., 2016) both for training and parsing.", "This parser obtained the 7th best LAS score on average in the 2018 CoNLL shared task (Zeman et al., 2018) , about 2.5 LAS points below the best system, which uses an ensemble system as well as ELMo embeddings, as introduced by Peters et al.", "(2018) .", "Note, however, that we use a slightly impoverished version of the model used for the shared task which is described in Smith et al.", "(2018a) : we use a less accurate POS tagger (UDPipe) and we do not make use of multi-treebank models.", "In addition, Smith et al.", "(2018a) use the three top items of the stack as well as the first item of the buffer to represent the configuration, while we only use the two top items of the stack and the first item of the buffer.", "Smith et al.", "(2018a) also use an extended feature set as introduced by Kiperwasser and Goldberg (2016b) where they also use the rightmost and leftmost children of the items of the stack and buffer that they consider.", "We do not use that extended feature set.", "This is to keep the parser settings as simple as possible and avoid adding confounding factors.", "It is still a near-SOTA model.", "We evaluate parsing models on the development sets and report the average of the 5 best results in 30 epochs and 5 runs with different random seeds.", "Data We test our models on a sample of treebanks from Universal Dependencies v2.1 (Nivre et al., 2017) .", "We follow the criteria from de Lhoneux et al.", "(2017c) to select our sample: we ensure typological variety, we ensure variety of domains, we verify the quality of the treebanks, and we use one treebank with a large amount of non-projective arcs.", "However, unlike them, we do not use extremely small treebanks.", "Our selection is the same as theirs but we remove the tiny treebanks and replace them with 3 others.", "Our final set is: Ancient Greek (PROIEL), Basque, Chinese, Czech, English, Finnish, French, Hebrew and Japanese.", "Results First, we look at the effect of our different recursive composition functions on the full model (i.e., the model using a BiLSTM for feature extraction as well as both character and POS tag information).", "As can be seen from Figure 1 , recursive composition using an LSTM cell (+lc) is generally better than recursive composition with a recurrent cell (+rc), but neither technique reliably improves the accuracy of a BiLSTM parser.", "Ablating the forward and backward LSTMs Second, we only consider the models using character and POS information and look at the effect of ablating parts of the BiLSTM on the different languages.", "The results can be seen in Figure 2 .", "As expected, the BiLSTM parser performs considerably better than both unidirectional LSTM parsers, and the backward LSTM is considerably better than the forward LSTM, on average.", "It is, however, interesting to note that using a forward LSTM is much more hurtful for some languages than others: 
it is especially hurtful for Chinese and Japanese.", "This can be explained by language properties: the right-headed languages suffer more from ablating the backward LSTM than other languages.", "We observe a correlation between how hurtful a forward model is compared to the baseline and the percentage of right-headed content dependency relations (R = βˆ’0.838, p < .01), see Figure 3 .", "5 There is no significant correlation between how hurtful ablating the forward LSTM is and the percentage of left-headed content dependency relations (p > .05) indicating that its usefulness is not dependent on language properties.", "We hypothesise that dependency length or sentence length can play a role but we also find no correlation between how hurtful it is to ablate the forward LSTM and average dependency or sentence length in treebanks.", "It is finally also interesting to note that the backward LSTM performance is close to the BiLSTMs performance for some languages (Japanese and French).", "5 The reason we only consider content dependency relations is that the UD scheme focuses on dependency relations between content words and treats function words as features of content words to maximise parallelism across languages (de Marneffe et al., 2014) .", "We now look at the effect of using recursive composition on these ablated models.", "Results are given in Figure 4 .", "First of all, we observe unsurprisingly that composition using an LSTM cell is much better than using a simple recurrent cell.", "Second, both types of composition help the backward LSTM case, but neither reliably helps the bi models.", "Finally, the recurrent cell does not help the forward LSTM case but the LSTM cell does to some extent.", "It is interesting to note that using composition, especially using an LSTM cell, bridges a substantial part of the gap between the bw and the bi models.", "These results can be related to the literature on transition-based dependency parsing.", "Transitionbased parsers generally rely on two types of features: history-based features over the emerging dependency tree and lookahead features over the buffer of remaining input.", "The former are based on a hierarchical structure, the latter are purely sequential.", "McDonald and Nivre (2007) and Mc-Donald and Nivre (2011) have shown that historybased features enhance transition-based parsers as long as they do not suffer from error propagation.", "However, Nivre (2006) has also shown that lookahead features are absolutely crucial given the greedy left-to-right parsing strategy.", "In the model architectures considered here, the backward LSTM provides an improved lookahead.", "Similarly to the lookahead in statistical parsing, it is sequential.", "The difference is that it gives information about upcoming words with unbounded length.", "The forward LSTM in this model architecture provides history-based information but unlike in statistical parsing, that information is built sequentially rather than hierarchically: the forward LSTM passes through the sentence in the linear order of the sentence.", "In our results, we see that lookahead features are more important than the history-based ones.", "It hurts parsing accuracy more to ablate the backward LSTM than to ablate the forward one.", "This is expected given that some history-based information is still available through the top tokens on the stack, while the lookahead information is almost lost completely without the backward LSTM.", "A composition function gives hierarchical information about the 
history of parsing actions.", "It makes sense that it helps the backward LSTM model most since that model has no access to any information about parsing history.", "It helps the forward LSTM slightly which indicates that there can be gains from using structured information about parsing history rather than sequential information.", "We could then expect that composition should help the BiLSTM model which, how- Figure 5 : LAS of baseline, using char and/or POS tags to construct word representations ever, is not the case.", "This might be because the BiLSTM constructs information about parsing history and lookahead into a unique representation.", "In any case, this indicates that BiLSTMs are powerful feature extractors which seem to capture useful information about subtrees.", "Ablating POS and character information Next, we look at the effect of the different word representation methods on the different languages, as represented in Figure 5 .", "As is consistent with the literature de Lhoneux et al., 2017a; Smith et al., 2018b) , using characterbased word representations and/or POS tags consistently improves parsing accuracy but has a different impact in different languages and the benefits of both methods are not cumulative: using the two combined is not much better than using either on its own.", "In particular, character models are an efficient way to obtain large improvements in morphologically rich languages.", "We look at the impact of recursive compositions on all combinations of ablated models, see Table 1 .", "We only look at the impact of using an LSTM cell rather than a recurrent cell since it was a better technique across the board (see previous section).", "Looking first at BiLSTMs, it seems that composition does not reliably help parsing accuracy, regardless of access to POS and character information.", "This indicates that the vectors obtained from the BiLSTM already contain information that would otherwise be obtained by using composition.", "Turning to results with either the forward or the backward LSTM ablated, we see the expected pattern.", "Composition helps more when the model lacks POS tags, indicating that there is some redundancy between these two methods of building contextual information.", "Composition helps recover a substantial part of the gap of the model with a backward LSTM with or without POS tag.", "It recovers a much less substantial part of the gap in other cases which means that, although there is some redundancy between these different methods of building contextual information, they are still complementary and a recursive composition function cannot fully compensate for the lack of a backward LSTM or POS and/or character information.", "There are some language idiosyncracies in the results.", "While composition helps recover most of the gap for the backward LSTM models without POS and/or character information for Czech and English, it does it to a much smaller extent for Basque and Finnish.", "We hypothesise that arc depth might impact the usefulness of composition, since more depth means more matrix multiplications with the composition function.", "However, we find no correlation between average arc depth of the treebanks and usefulness of composition.", "It is an open question why composition helps some languages more than others.", "Note that we are not the first to use composition over vectors obtained from a BiLSTM in the context of dependency parsing, as this was done by Qi and Manning (2017) .", "The difference is that they compose vectors 
before scoring transitions.", "It was also done by Kiperwasser and Goldberg (2016a) who showed that using BiLSTM vectors for words in their Tree LSTM parser is helpful but they did not compare this to using BiLSTM vectors without the Tree LSTM.", "Recurrent and recursive LSTMs in the way they have been considered in this paper are two ways of constructing contextual information and making it available for local decisions in a greedy parser.", "The strength of recursive LSTMs is that they can build this contextual information using hierarchical context rather than linear context.", "A possible weakness is that this makes the model sensitive to error propagation: a wrong attachment leads to using the wrong contextual information.", "It is therefore possible that the benefits and drawbacks of using this method cancel each other out in the context of BiLSTMs.", "Ensemble To investigate further the information captured by BiLSTMs, we ensemble the 6 versions of the models with POS and character information with the different feature extractors (bi, bw, f w) with (+lc) and without composition.", "We use the (unweighted) reparsing technique of Sagae and Lavie (2006) 6 and ignoring labels.", "As can be seen from the UAS scores in Table 2 , the ensemble (full) largely outperforms the parser using only a BiLSTM, indicating that the information obtained from the different models is complementary.", "To investigate the contribution of each of the 6 models, we ablate each one by one.", "As can be seen from Table 2 , ablating either of the BiLSTM models or the backward LSTM using composition, results in the least effective of the ablated models, further strengthening the conclusion that BiL-STMs are powerful feature extractors.", "Conclusion We investigated the impact of composing the representation of subtrees in a transition-based parser.", "We observed that composition does not reliably help a parser that uses a BiLSTM for feature extraction, indicating that vectors obtained from the BiLSTM might capture subtree information, which is consistent with the results of Linzen et al.", "(2016) .", "However, we observe that, when ablating the backward LSTM, performance drops and recursive composition does not help to recover much of this gap.", "We hypothesise that this is because the backward LSTM primarily improves the lookahead for the greedy parser.", "When ablating the forward LSTM, performance drops to a smaller extent and recursive composition recovers a substantial part of the gap.", "This indicates that a forward LSTM and a recursive composition function capture similar information, which we take to be related to the rich history-based features crucial for a transition-based parser.", "To capture this infor-mation, a recursive composition function is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM.", "We further find that recursive composition helps more when POS tags are ablated from the model, indicating that POS tags and a recursive composition function are partly redundant ways of constructing contextual information.", "Finally, we correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "K&G Transition-Based Parsing", "Composing Subtree Representations", "Composition in a K&G Parser", "Results", "Ablating the forward and backward LSTMs", "Ablating POS and character information", "Ensemble", "Conclusion" ] }
GEM-SciDuet-train-107#paper-1284#slide-4
Recursive Composition in the BiLSTM parser
[Diagram] Word vectors x_the, x_brown, x_fox, x_jumped, x_root are fed to a backward LSTM (LSTM_b) and a forward LSTM (LSTM_f); their outputs are concatenated into token vectors v_the ... v_root, each paired with a subtree vector c_the ... c_root. Example attachment (left nmod): c_fox = tanh(W[c_fox; c_brown; left_nmod] + b). Composition functions: c_head = tanh(W[h; d; r] + b) (rc); c_head = LSTM([h; d; r]) (lc).
[Diagram] Word vectors x_the, x_brown, x_fox, x_jumped, x_root are fed to a backward LSTM (LSTM_b) and a forward LSTM (LSTM_f); their outputs are concatenated into token vectors v_the ... v_root, each paired with a subtree vector c_the ... c_root. Example attachment (left nmod): c_fox = tanh(W[c_fox; c_brown; left_nmod] + b). Composition functions: c_head = tanh(W[h; d; r] + b) (rc); c_head = LSTM([h; d; r]) (lc).
[]
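The record above describes (and its slide depicts) the recurrent-cell composition function c = tanh(W[h; d; r] + b) used to build subtree representations. Below is a minimal NumPy sketch of that operation for illustration only; it is not taken from the dataset or the authors' parser, and the dimensions, initialisation, and names (compose_rc, r_vec, etc.) are assumptions made for the example.

```python
# Illustrative sketch (not the authors' code): recurrent-cell composition
# c = tanh(W [h; d; r] + b). Dimensions and initialisation are assumptions.
import numpy as np

rng = np.random.default_rng(0)

dim = 4                                         # hypothetical vector size for h, d, r
W = rng.standard_normal((dim, 3 * dim)) * 0.1   # composition weights
b = np.zeros(dim)                               # bias

def compose_rc(h, d, r):
    """Compose head vector h with dependent vector d and arc label/direction vector r."""
    return np.tanh(W @ np.concatenate([h, d, r]) + b)

# Toy usage: attach a dependent to a head and replace the head's subtree vector.
c_head = rng.standard_normal(dim)   # current subtree vector of the head
c_dep  = rng.standard_normal(dim)   # subtree vector of the dependent
r_vec  = rng.standard_normal(dim)   # embedding of (label, direction), e.g. left-nmod
c_head = compose_rc(c_head, c_dep, r_vec)
print(c_head.shape)                 # (4,)
```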
GEM-SciDuet-train-107#paper-1284#slide-5
1284
Recursive Subtree Composition in LSTM-Based Dependency Parsing
The need for tree structure modelling on top of sequence modelling is an open issue in neural dependency parsing. We investigate the impact of adding a tree layer on top of a sequential model by recursively composing subtree representations (composition) in a transition-based parser that uses features extracted by a BiLSTM. Composition seems superfluous with such a model, suggesting that BiLSTMs capture information about subtrees. We perform model ablations to tease out the conditions under which composition helps. When ablating the backward LSTM, performance drops and composition does not recover much of the gap. When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information. We take the backward LSTM to be related to lookahead features and the forward LSTM to the rich history-based features both crucial for transition-based parsers. To capture history-based information, composition is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM. We correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203 ], "paper_content_text": [ "Introduction Recursive neural networks allow us to construct vector representations of trees or subtrees.", "They have been used for constituency parsing by Socher et al.", "(2013) and Dyer et al.", "(2016) and for dependency parsing by Stenetorp (2013) and , among others.", "In particular, showed that composing representations of subtrees using recursive neural networks can be beneficial for transition-based dependency parsing.", "These results were further strengthened in Kuncoro et al.", "(2017) who showed, using ablation experiments, that composition is key in the Recurrent Neural Network Grammar (RNNG) generative parser by Dyer et al.", "(2016) .", "In a parallel development, Kiperwasser and Goldberg (2016b) showed that using BiLSTMs for feature extraction can lead to high parsing accuracy even with fairly simple parsing architectures, and using BiLSTMs for feature extraction has therefore become very popular in dependency parsing.", "It is used in the state-of-the-art parser of Dozat and Manning (2017) , was used in 8 of the 10 highest performing systems of the 2017 CoNLL shared task (Zeman et al., 2017) and 10 out of the 10 highest performing systems of the 2018 CoNLL shared task (Zeman et al., 2018) .", "This raises the question of whether features extracted with BiLSTMs in themselves capture information about subtrees, thus making recursive composition superfluous.", "Some support for this hypothesis comes from the results of Linzen et al.", "(2016) which indicate that LSTMs can capture hierarchical information: they can be trained to predict long-distance number agreement in English.", "Those results were extended to more constructions and three additional languages by Gulordava et al.", "(2018) .", "However, Kuncoro et al.", "(2018) have also shown that although sequential LSTMs can learn syntactic information, a recursive neural network which explicitly models hierarchy (the RNNG model from ) is better at this: it performs better on the number agreement task from Linzen et al.", "(2016) .", "To further explore this question in the context of dependency parsing, we investigate the use of recursive composition (henceforth referred to as composition) in a parser with an architecture like the one in Kiperwasser and Goldberg (2016b) .", "This allows us to explore variations of features and isolate the conditions under which composi-tion is helpful.", "We hypothesise that the use of a BiLSTM for feature extraction makes it possible to capture information about subtrees and therefore makes the use of subtree composition superfluous.", "We 
hypothesise that composition becomes useful when part of the BiLSTM is ablated, the forward or the backward LSTM.", "We further hypothesise that composition is most useful when the parser has no access to information about the function of words in the context of the sentence given by POS tags.", "When using POS tags, the tagger has indeed had access to the full sentence.", "We additionally look at what happens when we ablate character vectors which have been shown to capture information which is partially overlapping with information from POS tags.", "We experiment with a wider variety of languages than in order to explore whether the usefulness of different model variants vary depending on language type.", "K&G Transition-Based Parsing We define the parsing architecture introduced by Kiperwasser and Goldberg (2016b) at a high level of abstraction and henceforth refer to it as K&G.", "A K&G parser is a greedy transition-based parser.", "1 For an input sentence of length n with words w 1 , .", ".", ".", ", w n , a sequence of vectors x 1:n is created, where the vector x i is a vector representation of the word w i .", "We refer to these as type vectors, as they are the same for all occurrences of a word type.", "Type vectors are then passed through a feature function which learns representations of words in the context of the sentence.", "x i = e(w i ) v i = f (x 1:n , i) We refer to the vector v i as a token vector, as it is different for different tokens of the same word type.", "In Kiperwasser and Goldberg (2016b) , the feature function used is a BiLSTM.", "As is usual in transition-based parsing, parsing involves taking transitions from an initial configuration to a terminal one.", "Parser configurations are represented by a stack, a buffer and set of dependency arcs (Nivre, 2008) .", "For each configuration c, the feature extractor concatenates the token representations of core elements from the stack and buffer.", "These token vectors are passed to a classifier, typically a Multilayer Perceptron (MLP).", "The MLP scores transitions together with the arc labels for transitions that involve adding an arc.", "Both the word type vectors and the BiLSTMs are trained together with the model.", "looked at the impact of using a recursive composition function in their parser, which is also a transition-based parser but with an architecture different from K&G.", "They make use of a variant of the LSTM called a stack LSTM.", "A stack LSTM has push and pop operations which allow passing through states in a tree structure rather than sequentially.", "Stack LSTMs are used to represent the stack, the buffer, and the sequence of past parsing actions performed for a configuration.", "Composing Subtree Representations The words of the sentence are represented by vectors of the word types, together with a vector representing the word's POS tag.", "In the initial configuration, the vectors of all words are in the buffer and the stack is empty.", "The representation of the buffer is the end state of a backward LSTM over the word vectors.", "As parsing evolves, the word vectors are popped from the buffer, pushed to and popped from the stack and the representations of stack and buffer get updated.", "define a recursive composition function and compose tree representations incrementally, as dependents get attached to their head.", "The composed representation c is built by concatenating the vector h of the head with the vector of the dependent d, as well as a vector r representing the label paired with the 
direction of the arc.", "That concatenated vector is passed through an affine transformation and then through a tanh non-linear activation.", "c = tanh(W [h; d; r] + b) They create two versions of the parser.", "In the first version, when a dependent is attached to a head, the word vector of the head is replaced by a composed vector of the head and dependent.", "In the second version, they simply keep the vector of the head when attaching a dependent to a head.", "They observe that the version with composition is substantially better than the version without, by 1.3 LAS points for English (on the Penn Treebank (PTB) test set) and 2.1 for Chinese (on the Chinese Treebank (CTB) test set).", "Their parser uses POS tag information.", "POS tags help to disambiguate between different functional uses of a word and in this way give information about the use of the word in context.", "We hypothesise that the effect of using a recursive composition function is stronger when not making use of POS tags.", "Composition in a K&G Parser The parsing architectures of the stack LSTM parser (S-LSTM) and K&G are different but have some similarities.", "2 In both cases, the configuration is represented by vectors obtained by LSTMs.", "In K&G, it is represented by the token vectors of top items of the stack and the first item of the buffer.", "In the S-LSTM, it is represented by the vector representations of the entire stack, buffer and sequence of past transitions.", "Both types of parsers learn vector representations of word types which are passed to an LSTM.", "In K&G, they are passed to an LSTM in a feature extraction step that happens before parsing.", "The LSTM in this case is used to learn vectors that have information about the context of each word, a token vector.", "In the S-LSTM, word type vectors are passed to Stack LSTMs as parsing evolves.", "In this case, LSTMs are used to learn vector representations of the stack and buffer (as well as one which learns a representation of the parsing action history).", "When composition is not used in the S-LSTM, word vectors represent word types.", "When composition is used, as parsing evolves, the stack and buffer vectors get updated with information about the subtrees they contain, so that they gradually become contextualised.", "In this sense, those vectors become more like token vectors in K&G.", "More specifically, as explained in the previous section, when a dependent is attached to its head, the composition function is applied to the vectors of head and dependent and the vector of the head is replaced by this composed vector.", "We cannot apply composition on type vectors in the K&G architecture, since they are not used after the feature extraction step and hence cannot influence the representation of the configuration.", "Instead, we apply composition on the token vectors.", "We embed those composed representations in the same space as the token vectors.", "In K&G, like in the S-LSTM, we can create a composition function and compose the representation of subtrees as parsing evolves.", "We create two versions of the parser, one where word tokens are represented by their token vector.", "The other where they are represented by their token vector and the vector of their subtree c i , which is initially just a copy of the token vector (v i = f (x 1:n , i)β€’c i ).", "When a dependent word d is attached to a word h with a relation and direction r, c i is computed with the same composition function as in the S-LSTM defined in the previous section, repeated 
below.", "3 This composition function is a simple recurrent cell.", "Simple RNNs have known shortcomings which have been addressed by using LSTMs, as proposed by Hochreiter and Schmidhuber (1997) .", "A natural extension to this composition function is therefore to replace it with an LSTM cell.", "We also try this variant.", "We construct LSTMs for subtrees.", "We initialise a new LSTM for each new subtree that is formed, that is, when a dependent d is attached to a head h which does not have any dependent yet.", "Each time we attach a dependent to a head, we construct a vector which is a concatenation of h, d and r. We pass this vector to the LSTM of h. c is the output state of the LSTM after passing through that vector.", "We denote those models with +rc for the one using an ungated recurrent cell and with +lc for the one using an LSTM cell.", "c = tanh(W [h; d; r] + b) c = LSTM([h; d; r]) As results show (see Β§ 5), neither type of composition seems useful when used with the K&G parsing model, which indicates that BiLSTMs capture information about subtrees.", "To further investigate this and in order to isolate the conditions under which composition is helpful, we perform different model ablations and test the impact of recursive composition on these ablated models.", "First, we ablate parts of the BiLSTMs: we ablate either the forward or the backward LSTM.", "We therefore build parsers with 3 different feature functions f (x, i) over the word type vectors x i in the sentence x: a BiLSTM (bi) (our baseline), a backward LSTM (bw) (i.e., ablating the forward LSTM) and a forward LSTM (f w) (i.e., ablating the backward LSTM): bi(x, i) = BILSTM(x 1:n , i) bw(x, i) = LSTM(x n:1 , i) f w(x, i) = LSTM(x 1:n , i) K&G parsers with unidirectional LSTMs are, in some sense, more similar to the S-LSTM than those with a BiLSTM, since the S-LSTM only uses unidirectional LSTMs.", "We hypothesise that composition will help the parser using unidirectional LSTMs in the same way it helps an S-LSTM.", "We additionally experiment with the vector representing the word at the input of the LSTM.", "The most complex representation consists of a concatenation of an embedding of the word type e(w i ), an embedding of the (predicted) POS tag of w i , p(w i ) and a character representation of the word obtained by running a BiLSTM over the characters ch 1:m of w i (BiLSTM(ch 1:m )).", "x i = e(w i ) β€’ p(w i ) β€’ BiLSTM(ch 1:m ) Without a POS tag embedding, the word vector is a representation of the word type.", "With POS information, we have some information about the word in the context of the sentence and the tagger has had access to the full sentence.", "The representation of the word at the input of the BiLSTM is therefore more contextualised and it can be expected that a recursive composition function will be less helpful than when POS information is not used.", "Character information has been shown to be useful for dependency parsing first by .", "and Smith et al.", "(2018b) among others have shown that POS and character information are somewhat complementary.", "used similar character vectors in the S-LSTM parser but did not look at the impact of composition when using these vectors.", "Here, we experiment with ablating either or both of the character and POS vectors.", "We look at the impact of using composition on the full model as well as these ablated models.", "We hypothesise that composition is most helpful when those vectors are not used, since they give information about the functional use of the 
word in context.", "Parser We use UUParser, a variant of the K&G transition-based parser that employs the arc-hybrid transition system from Kuhlmann et al.", "(2011) extended with a SWAP transition and a Static-Dynamic oracle, as described in de Lhoneux et al.", "(2017b) 4 .", "The SWAP transition is used to allow the construction of non-projective dependency trees (Nivre, 2009) .", "We use default hyperparameters.", "When using POS tags, we use the universal POS tags from the UD treebanks which are coarsegrained and consistent across languages.", "Those POS tags are predicted by UDPipe (Straka et al., 2016) both for training and parsing.", "This parser obtained the 7th best LAS score on average in the 2018 CoNLL shared task (Zeman et al., 2018) , about 2.5 LAS points below the best system, which uses an ensemble system as well as ELMo embeddings, as introduced by Peters et al.", "(2018) .", "Note, however, that we use a slightly impoverished version of the model used for the shared task which is described in Smith et al.", "(2018a) : we use a less accurate POS tagger (UDPipe) and we do not make use of multi-treebank models.", "In addition, Smith et al.", "(2018a) use the three top items of the stack as well as the first item of the buffer to represent the configuration, while we only use the two top items of the stack and the first item of the buffer.", "Smith et al.", "(2018a) also use an extended feature set as introduced by Kiperwasser and Goldberg (2016b) where they also use the rightmost and leftmost children of the items of the stack and buffer that they consider.", "We do not use that extended feature set.", "This is to keep the parser settings as simple as possible and avoid adding confounding factors.", "It is still a near-SOTA model.", "We evaluate parsing models on the development sets and report the average of the 5 best results in 30 epochs and 5 runs with different random seeds.", "Data We test our models on a sample of treebanks from Universal Dependencies v2.1 (Nivre et al., 2017) .", "We follow the criteria from de Lhoneux et al.", "(2017c) to select our sample: we ensure typological variety, we ensure variety of domains, we verify the quality of the treebanks, and we use one treebank with a large amount of non-projective arcs.", "However, unlike them, we do not use extremely small treebanks.", "Our selection is the same as theirs but we remove the tiny treebanks and replace them with 3 others.", "Our final set is: Ancient Greek (PROIEL), Basque, Chinese, Czech, English, Finnish, French, Hebrew and Japanese.", "Results First, we look at the effect of our different recursive composition functions on the full model (i.e., the model using a BiLSTM for feature extraction as well as both character and POS tag information).", "As can be seen from Figure 1 , recursive composition using an LSTM cell (+lc) is generally better than recursive composition with a recurrent cell (+rc), but neither technique reliably improves the accuracy of a BiLSTM parser.", "Ablating the forward and backward LSTMs Second, we only consider the models using character and POS information and look at the effect of ablating parts of the BiLSTM on the different languages.", "The results can be seen in Figure 2 .", "As expected, the BiLSTM parser performs considerably better than both unidirectional LSTM parsers, and the backward LSTM is considerably better than the forward LSTM, on average.", "It is, however, interesting to note that using a forward LSTM is much more hurtful for some languages than others: 
it is especially hurtful for Chinese and Japanese.", "This can be explained by language properties: the right-headed languages suffer more from ablating the backward LSTM than other languages.", "We observe a correlation between how hurtful a forward model is compared to the baseline and the percentage of right-headed content dependency relations (R = βˆ’0.838, p < .01), see Figure 3 .", "5 There is no significant correlation between how hurtful ablating the forward LSTM is and the percentage of left-headed content dependency relations (p > .05) indicating that its usefulness is not dependent on language properties.", "We hypothesise that dependency length or sentence length can play a role but we also find no correlation between how hurtful it is to ablate the forward LSTM and average dependency or sentence length in treebanks.", "It is finally also interesting to note that the backward LSTM performance is close to the BiLSTMs performance for some languages (Japanese and French).", "5 The reason we only consider content dependency relations is that the UD scheme focuses on dependency relations between content words and treats function words as features of content words to maximise parallelism across languages (de Marneffe et al., 2014) .", "We now look at the effect of using recursive composition on these ablated models.", "Results are given in Figure 4 .", "First of all, we observe unsurprisingly that composition using an LSTM cell is much better than using a simple recurrent cell.", "Second, both types of composition help the backward LSTM case, but neither reliably helps the bi models.", "Finally, the recurrent cell does not help the forward LSTM case but the LSTM cell does to some extent.", "It is interesting to note that using composition, especially using an LSTM cell, bridges a substantial part of the gap between the bw and the bi models.", "These results can be related to the literature on transition-based dependency parsing.", "Transitionbased parsers generally rely on two types of features: history-based features over the emerging dependency tree and lookahead features over the buffer of remaining input.", "The former are based on a hierarchical structure, the latter are purely sequential.", "McDonald and Nivre (2007) and Mc-Donald and Nivre (2011) have shown that historybased features enhance transition-based parsers as long as they do not suffer from error propagation.", "However, Nivre (2006) has also shown that lookahead features are absolutely crucial given the greedy left-to-right parsing strategy.", "In the model architectures considered here, the backward LSTM provides an improved lookahead.", "Similarly to the lookahead in statistical parsing, it is sequential.", "The difference is that it gives information about upcoming words with unbounded length.", "The forward LSTM in this model architecture provides history-based information but unlike in statistical parsing, that information is built sequentially rather than hierarchically: the forward LSTM passes through the sentence in the linear order of the sentence.", "In our results, we see that lookahead features are more important than the history-based ones.", "It hurts parsing accuracy more to ablate the backward LSTM than to ablate the forward one.", "This is expected given that some history-based information is still available through the top tokens on the stack, while the lookahead information is almost lost completely without the backward LSTM.", "A composition function gives hierarchical information about the 
history of parsing actions.", "It makes sense that it helps the backward LSTM model most since that model has no access to any information about parsing history.", "It helps the forward LSTM slightly which indicates that there can be gains from using structured information about parsing history rather than sequential information.", "We could then expect that composition should help the BiLSTM model which, how- Figure 5 : LAS of baseline, using char and/or POS tags to construct word representations ever, is not the case.", "This might be because the BiLSTM constructs information about parsing history and lookahead into a unique representation.", "In any case, this indicates that BiLSTMs are powerful feature extractors which seem to capture useful information about subtrees.", "Ablating POS and character information Next, we look at the effect of the different word representation methods on the different languages, as represented in Figure 5 .", "As is consistent with the literature de Lhoneux et al., 2017a; Smith et al., 2018b) , using characterbased word representations and/or POS tags consistently improves parsing accuracy but has a different impact in different languages and the benefits of both methods are not cumulative: using the two combined is not much better than using either on its own.", "In particular, character models are an efficient way to obtain large improvements in morphologically rich languages.", "We look at the impact of recursive compositions on all combinations of ablated models, see Table 1 .", "We only look at the impact of using an LSTM cell rather than a recurrent cell since it was a better technique across the board (see previous section).", "Looking first at BiLSTMs, it seems that composition does not reliably help parsing accuracy, regardless of access to POS and character information.", "This indicates that the vectors obtained from the BiLSTM already contain information that would otherwise be obtained by using composition.", "Turning to results with either the forward or the backward LSTM ablated, we see the expected pattern.", "Composition helps more when the model lacks POS tags, indicating that there is some redundancy between these two methods of building contextual information.", "Composition helps recover a substantial part of the gap of the model with a backward LSTM with or without POS tag.", "It recovers a much less substantial part of the gap in other cases which means that, although there is some redundancy between these different methods of building contextual information, they are still complementary and a recursive composition function cannot fully compensate for the lack of a backward LSTM or POS and/or character information.", "There are some language idiosyncracies in the results.", "While composition helps recover most of the gap for the backward LSTM models without POS and/or character information for Czech and English, it does it to a much smaller extent for Basque and Finnish.", "We hypothesise that arc depth might impact the usefulness of composition, since more depth means more matrix multiplications with the composition function.", "However, we find no correlation between average arc depth of the treebanks and usefulness of composition.", "It is an open question why composition helps some languages more than others.", "Note that we are not the first to use composition over vectors obtained from a BiLSTM in the context of dependency parsing, as this was done by Qi and Manning (2017) .", "The difference is that they compose vectors 
before scoring transitions.", "It was also done by Kiperwasser and Goldberg (2016a) who showed that using BiLSTM vectors for words in their Tree LSTM parser is helpful but they did not compare this to using BiLSTM vectors without the Tree LSTM.", "Recurrent and recursive LSTMs in the way they have been considered in this paper are two ways of constructing contextual information and making it available for local decisions in a greedy parser.", "The strength of recursive LSTMs is that they can build this contextual information using hierarchical context rather than linear context.", "A possible weakness is that this makes the model sensitive to error propagation: a wrong attachment leads to using the wrong contextual information.", "It is therefore possible that the benefits and drawbacks of using this method cancel each other out in the context of BiLSTMs.", "Ensemble To investigate further the information captured by BiLSTMs, we ensemble the 6 versions of the models with POS and character information with the different feature extractors (bi, bw, f w) with (+lc) and without composition.", "We use the (unweighted) reparsing technique of Sagae and Lavie (2006) 6 and ignoring labels.", "As can be seen from the UAS scores in Table 2 , the ensemble (full) largely outperforms the parser using only a BiLSTM, indicating that the information obtained from the different models is complementary.", "To investigate the contribution of each of the 6 models, we ablate each one by one.", "As can be seen from Table 2 , ablating either of the BiLSTM models or the backward LSTM using composition, results in the least effective of the ablated models, further strengthening the conclusion that BiL-STMs are powerful feature extractors.", "Conclusion We investigated the impact of composing the representation of subtrees in a transition-based parser.", "We observed that composition does not reliably help a parser that uses a BiLSTM for feature extraction, indicating that vectors obtained from the BiLSTM might capture subtree information, which is consistent with the results of Linzen et al.", "(2016) .", "However, we observe that, when ablating the backward LSTM, performance drops and recursive composition does not help to recover much of this gap.", "We hypothesise that this is because the backward LSTM primarily improves the lookahead for the greedy parser.", "When ablating the forward LSTM, performance drops to a smaller extent and recursive composition recovers a substantial part of the gap.", "This indicates that a forward LSTM and a recursive composition function capture similar information, which we take to be related to the rich history-based features crucial for a transition-based parser.", "To capture this infor-mation, a recursive composition function is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM.", "We further find that recursive composition helps more when POS tags are ablated from the model, indicating that POS tags and a recursive composition function are partly redundant ways of constructing contextual information.", "Finally, we correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "K&G Transition-Based Parsing", "Composing Subtree Representations", "Composition in a K&G Parser", "Results", "Ablating the forward and backward LSTMs", "Ablating POS and character information", "Ensemble", "Conclusion" ] }
GEM-SciDuet-train-107#paper-1284#slide-5
Results BiLSTM composition
[Bar chart] Parsing results per language (cs, en, eu, fi, a_grc, he, ja, zh, avg.) comparing the bi, bi+rc, and bi+lc models.
[Bar chart] Parsing results per language (cs, en, eu, fi, a_grc, he, ja, zh, avg.) comparing the bi, bi+rc, and bi+lc models.
[]
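The paper content repeated in the record above also defines the LSTM-cell variant of composition (+lc), c = LSTM([h; d; r]), where a new LSTM is initialised for each subtree and updated every time a dependent is attached to its head. The sketch below illustrates that idea with PyTorch's nn.LSTMCell; it is not the authors' implementation, and the Subtree class, the compose_lc helper, and all dimensions are hypothetical.

```python
# Illustrative sketch (not the authors' code): LSTM-cell composition (+lc),
# c = LSTM([h; d; r]), with one LSTM state per subtree. Dimensions are assumptions.
import torch
import torch.nn as nn

dim = 4
cell = nn.LSTMCell(input_size=3 * dim, hidden_size=dim)

class Subtree:
    """Hypothetical per-head subtree state: hidden and cell state of the composition LSTM."""
    def __init__(self):
        self.hx = torch.zeros(1, dim)
        self.cx = torch.zeros(1, dim)

def compose_lc(subtree, h_vec, d_vec, r_vec):
    """Feed [h; d; r] into the head's subtree LSTM; the output state is the new c."""
    inp = torch.cat([h_vec, d_vec, r_vec], dim=-1).unsqueeze(0)   # shape (1, 3*dim)
    subtree.hx, subtree.cx = cell(inp, (subtree.hx, subtree.cx))
    return subtree.hx.squeeze(0)

# Toy usage: attach two dependents to the same head in sequence; the head's
# vector is replaced by the composed vector after each attachment.
head_state = Subtree()
h, d1, d2, r = (torch.randn(dim) for _ in range(4))
c = compose_lc(head_state, h, d1, r)
c = compose_lc(head_state, c, d2, r)
print(c.shape)   # torch.Size([4])
```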
GEM-SciDuet-train-107#paper-1284#slide-6
1284
Recursive Subtree Composition in LSTM-Based Dependency Parsing
The need for tree structure modelling on top of sequence modelling is an open issue in neural dependency parsing. We investigate the impact of adding a tree layer on top of a sequential model by recursively composing subtree representations (composition) in a transition-based parser that uses features extracted by a BiLSTM. Composition seems superfluous with such a model, suggesting that BiLSTMs capture information about subtrees. We perform model ablations to tease out the conditions under which composition helps. When ablating the backward LSTM, performance drops and composition does not recover much of the gap. When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information. We take the backward LSTM to be related to lookahead features and the forward LSTM to the rich history-based features both crucial for transition-based parsers. To capture history-based information, composition is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM. We correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203 ], "paper_content_text": [ "Introduction Recursive neural networks allow us to construct vector representations of trees or subtrees.", "They have been used for constituency parsing by Socher et al.", "(2013) and Dyer et al.", "(2016) and for dependency parsing by Stenetorp (2013) and , among others.", "In particular, showed that composing representations of subtrees using recursive neural networks can be beneficial for transition-based dependency parsing.", "These results were further strengthened in Kuncoro et al.", "(2017) who showed, using ablation experiments, that composition is key in the Recurrent Neural Network Grammar (RNNG) generative parser by Dyer et al.", "(2016) .", "In a parallel development, Kiperwasser and Goldberg (2016b) showed that using BiLSTMs for feature extraction can lead to high parsing accuracy even with fairly simple parsing architectures, and using BiLSTMs for feature extraction has therefore become very popular in dependency parsing.", "It is used in the state-of-the-art parser of Dozat and Manning (2017) , was used in 8 of the 10 highest performing systems of the 2017 CoNLL shared task (Zeman et al., 2017) and 10 out of the 10 highest performing systems of the 2018 CoNLL shared task (Zeman et al., 2018) .", "This raises the question of whether features extracted with BiLSTMs in themselves capture information about subtrees, thus making recursive composition superfluous.", "Some support for this hypothesis comes from the results of Linzen et al.", "(2016) which indicate that LSTMs can capture hierarchical information: they can be trained to predict long-distance number agreement in English.", "Those results were extended to more constructions and three additional languages by Gulordava et al.", "(2018) .", "However, Kuncoro et al.", "(2018) have also shown that although sequential LSTMs can learn syntactic information, a recursive neural network which explicitly models hierarchy (the RNNG model from ) is better at this: it performs better on the number agreement task from Linzen et al.", "(2016) .", "To further explore this question in the context of dependency parsing, we investigate the use of recursive composition (henceforth referred to as composition) in a parser with an architecture like the one in Kiperwasser and Goldberg (2016b) .", "This allows us to explore variations of features and isolate the conditions under which composi-tion is helpful.", "We hypothesise that the use of a BiLSTM for feature extraction makes it possible to capture information about subtrees and therefore makes the use of subtree composition superfluous.", "We 
hypothesise that composition becomes useful when part of the BiLSTM is ablated, the forward or the backward LSTM.", "We further hypothesise that composition is most useful when the parser has no access to information about the function of words in the context of the sentence given by POS tags.", "When using POS tags, the tagger has indeed had access to the full sentence.", "We additionally look at what happens when we ablate character vectors which have been shown to capture information which is partially overlapping with information from POS tags.", "We experiment with a wider variety of languages than in order to explore whether the usefulness of different model variants vary depending on language type.", "K&G Transition-Based Parsing We define the parsing architecture introduced by Kiperwasser and Goldberg (2016b) at a high level of abstraction and henceforth refer to it as K&G.", "A K&G parser is a greedy transition-based parser.", "1 For an input sentence of length n with words w 1 , .", ".", ".", ", w n , a sequence of vectors x 1:n is created, where the vector x i is a vector representation of the word w i .", "We refer to these as type vectors, as they are the same for all occurrences of a word type.", "Type vectors are then passed through a feature function which learns representations of words in the context of the sentence.", "x i = e(w i ) v i = f (x 1:n , i) We refer to the vector v i as a token vector, as it is different for different tokens of the same word type.", "In Kiperwasser and Goldberg (2016b) , the feature function used is a BiLSTM.", "As is usual in transition-based parsing, parsing involves taking transitions from an initial configuration to a terminal one.", "Parser configurations are represented by a stack, a buffer and set of dependency arcs (Nivre, 2008) .", "For each configuration c, the feature extractor concatenates the token representations of core elements from the stack and buffer.", "These token vectors are passed to a classifier, typically a Multilayer Perceptron (MLP).", "The MLP scores transitions together with the arc labels for transitions that involve adding an arc.", "Both the word type vectors and the BiLSTMs are trained together with the model.", "looked at the impact of using a recursive composition function in their parser, which is also a transition-based parser but with an architecture different from K&G.", "They make use of a variant of the LSTM called a stack LSTM.", "A stack LSTM has push and pop operations which allow passing through states in a tree structure rather than sequentially.", "Stack LSTMs are used to represent the stack, the buffer, and the sequence of past parsing actions performed for a configuration.", "Composing Subtree Representations The words of the sentence are represented by vectors of the word types, together with a vector representing the word's POS tag.", "In the initial configuration, the vectors of all words are in the buffer and the stack is empty.", "The representation of the buffer is the end state of a backward LSTM over the word vectors.", "As parsing evolves, the word vectors are popped from the buffer, pushed to and popped from the stack and the representations of stack and buffer get updated.", "define a recursive composition function and compose tree representations incrementally, as dependents get attached to their head.", "The composed representation c is built by concatenating the vector h of the head with the vector of the dependent d, as well as a vector r representing the label paired with the 
direction of the arc.", "That concatenated vector is passed through an affine transformation and then through a tanh non-linear activation.", "c = tanh(W [h; d; r] + b) They create two versions of the parser.", "In the first version, when a dependent is attached to a head, the word vector of the head is replaced by a composed vector of the head and dependent.", "In the second version, they simply keep the vector of the head when attaching a dependent to a head.", "They observe that the version with composition is substantially better than the version without, by 1.3 LAS points for English (on the Penn Treebank (PTB) test set) and 2.1 for Chinese (on the Chinese Treebank (CTB) test set).", "Their parser uses POS tag information.", "POS tags help to disambiguate between different functional uses of a word and in this way give information about the use of the word in context.", "We hypothesise that the effect of using a recursive composition function is stronger when not making use of POS tags.", "Composition in a K&G Parser The parsing architectures of the stack LSTM parser (S-LSTM) and K&G are different but have some similarities.", "2 In both cases, the configuration is represented by vectors obtained by LSTMs.", "In K&G, it is represented by the token vectors of top items of the stack and the first item of the buffer.", "In the S-LSTM, it is represented by the vector representations of the entire stack, buffer and sequence of past transitions.", "Both types of parsers learn vector representations of word types which are passed to an LSTM.", "In K&G, they are passed to an LSTM in a feature extraction step that happens before parsing.", "The LSTM in this case is used to learn vectors that have information about the context of each word, a token vector.", "In the S-LSTM, word type vectors are passed to Stack LSTMs as parsing evolves.", "In this case, LSTMs are used to learn vector representations of the stack and buffer (as well as one which learns a representation of the parsing action history).", "When composition is not used in the S-LSTM, word vectors represent word types.", "When composition is used, as parsing evolves, the stack and buffer vectors get updated with information about the subtrees they contain, so that they gradually become contextualised.", "In this sense, those vectors become more like token vectors in K&G.", "More specifically, as explained in the previous section, when a dependent is attached to its head, the composition function is applied to the vectors of head and dependent and the vector of the head is replaced by this composed vector.", "We cannot apply composition on type vectors in the K&G architecture, since they are not used after the feature extraction step and hence cannot influence the representation of the configuration.", "Instead, we apply composition on the token vectors.", "We embed those composed representations in the same space as the token vectors.", "In K&G, like in the S-LSTM, we can create a composition function and compose the representation of subtrees as parsing evolves.", "We create two versions of the parser, one where word tokens are represented by their token vector.", "The other where they are represented by their token vector and the vector of their subtree c i , which is initially just a copy of the token vector (v i = f (x 1:n , i)β€’c i ).", "When a dependent word d is attached to a word h with a relation and direction r, c i is computed with the same composition function as in the S-LSTM defined in the previous section, repeated 
below.", "3 This composition function is a simple recurrent cell.", "Simple RNNs have known shortcomings which have been addressed by using LSTMs, as proposed by Hochreiter and Schmidhuber (1997) .", "A natural extension to this composition function is therefore to replace it with an LSTM cell.", "We also try this variant.", "We construct LSTMs for subtrees.", "We initialise a new LSTM for each new subtree that is formed, that is, when a dependent d is attached to a head h which does not have any dependent yet.", "Each time we attach a dependent to a head, we construct a vector which is a concatenation of h, d and r. We pass this vector to the LSTM of h. c is the output state of the LSTM after passing through that vector.", "We denote those models with +rc for the one using an ungated recurrent cell and with +lc for the one using an LSTM cell.", "c = tanh(W [h; d; r] + b) c = LSTM([h; d; r]) As results show (see Β§ 5), neither type of composition seems useful when used with the K&G parsing model, which indicates that BiLSTMs capture information about subtrees.", "To further investigate this and in order to isolate the conditions under which composition is helpful, we perform different model ablations and test the impact of recursive composition on these ablated models.", "First, we ablate parts of the BiLSTMs: we ablate either the forward or the backward LSTM.", "We therefore build parsers with 3 different feature functions f (x, i) over the word type vectors x i in the sentence x: a BiLSTM (bi) (our baseline), a backward LSTM (bw) (i.e., ablating the forward LSTM) and a forward LSTM (f w) (i.e., ablating the backward LSTM): bi(x, i) = BILSTM(x 1:n , i) bw(x, i) = LSTM(x n:1 , i) f w(x, i) = LSTM(x 1:n , i) K&G parsers with unidirectional LSTMs are, in some sense, more similar to the S-LSTM than those with a BiLSTM, since the S-LSTM only uses unidirectional LSTMs.", "We hypothesise that composition will help the parser using unidirectional LSTMs in the same way it helps an S-LSTM.", "We additionally experiment with the vector representing the word at the input of the LSTM.", "The most complex representation consists of a concatenation of an embedding of the word type e(w i ), an embedding of the (predicted) POS tag of w i , p(w i ) and a character representation of the word obtained by running a BiLSTM over the characters ch 1:m of w i (BiLSTM(ch 1:m )).", "x i = e(w i ) β€’ p(w i ) β€’ BiLSTM(ch 1:m ) Without a POS tag embedding, the word vector is a representation of the word type.", "With POS information, we have some information about the word in the context of the sentence and the tagger has had access to the full sentence.", "The representation of the word at the input of the BiLSTM is therefore more contextualised and it can be expected that a recursive composition function will be less helpful than when POS information is not used.", "Character information has been shown to be useful for dependency parsing first by .", "and Smith et al.", "(2018b) among others have shown that POS and character information are somewhat complementary.", "used similar character vectors in the S-LSTM parser but did not look at the impact of composition when using these vectors.", "Here, we experiment with ablating either or both of the character and POS vectors.", "We look at the impact of using composition on the full model as well as these ablated models.", "We hypothesise that composition is most helpful when those vectors are not used, since they give information about the functional use of the 
word in context.", "Parser We use UUParser, a variant of the K&G transition-based parser that employs the arc-hybrid transition system from Kuhlmann et al.", "(2011) extended with a SWAP transition and a Static-Dynamic oracle, as described in de Lhoneux et al.", "(2017b) 4 .", "The SWAP transition is used to allow the construction of non-projective dependency trees (Nivre, 2009) .", "We use default hyperparameters.", "When using POS tags, we use the universal POS tags from the UD treebanks which are coarsegrained and consistent across languages.", "Those POS tags are predicted by UDPipe (Straka et al., 2016) both for training and parsing.", "This parser obtained the 7th best LAS score on average in the 2018 CoNLL shared task (Zeman et al., 2018) , about 2.5 LAS points below the best system, which uses an ensemble system as well as ELMo embeddings, as introduced by Peters et al.", "(2018) .", "Note, however, that we use a slightly impoverished version of the model used for the shared task which is described in Smith et al.", "(2018a) : we use a less accurate POS tagger (UDPipe) and we do not make use of multi-treebank models.", "In addition, Smith et al.", "(2018a) use the three top items of the stack as well as the first item of the buffer to represent the configuration, while we only use the two top items of the stack and the first item of the buffer.", "Smith et al.", "(2018a) also use an extended feature set as introduced by Kiperwasser and Goldberg (2016b) where they also use the rightmost and leftmost children of the items of the stack and buffer that they consider.", "We do not use that extended feature set.", "This is to keep the parser settings as simple as possible and avoid adding confounding factors.", "It is still a near-SOTA model.", "We evaluate parsing models on the development sets and report the average of the 5 best results in 30 epochs and 5 runs with different random seeds.", "Data We test our models on a sample of treebanks from Universal Dependencies v2.1 (Nivre et al., 2017) .", "We follow the criteria from de Lhoneux et al.", "(2017c) to select our sample: we ensure typological variety, we ensure variety of domains, we verify the quality of the treebanks, and we use one treebank with a large amount of non-projective arcs.", "However, unlike them, we do not use extremely small treebanks.", "Our selection is the same as theirs but we remove the tiny treebanks and replace them with 3 others.", "Our final set is: Ancient Greek (PROIEL), Basque, Chinese, Czech, English, Finnish, French, Hebrew and Japanese.", "Results First, we look at the effect of our different recursive composition functions on the full model (i.e., the model using a BiLSTM for feature extraction as well as both character and POS tag information).", "As can be seen from Figure 1 , recursive composition using an LSTM cell (+lc) is generally better than recursive composition with a recurrent cell (+rc), but neither technique reliably improves the accuracy of a BiLSTM parser.", "Ablating the forward and backward LSTMs Second, we only consider the models using character and POS information and look at the effect of ablating parts of the BiLSTM on the different languages.", "The results can be seen in Figure 2 .", "As expected, the BiLSTM parser performs considerably better than both unidirectional LSTM parsers, and the backward LSTM is considerably better than the forward LSTM, on average.", "It is, however, interesting to note that using a forward LSTM is much more hurtful for some languages than others: 
it is especially hurtful for Chinese and Japanese.", "This can be explained by language properties: the right-headed languages suffer more from ablating the backward LSTM than other languages.", "We observe a correlation between how hurtful a forward model is compared to the baseline and the percentage of right-headed content dependency relations (R = βˆ’0.838, p < .01), see Figure 3 .", "5 There is no significant correlation between how hurtful ablating the forward LSTM is and the percentage of left-headed content dependency relations (p > .05) indicating that its usefulness is not dependent on language properties.", "We hypothesise that dependency length or sentence length can play a role but we also find no correlation between how hurtful it is to ablate the forward LSTM and average dependency or sentence length in treebanks.", "It is finally also interesting to note that the backward LSTM performance is close to the BiLSTMs performance for some languages (Japanese and French).", "5 The reason we only consider content dependency relations is that the UD scheme focuses on dependency relations between content words and treats function words as features of content words to maximise parallelism across languages (de Marneffe et al., 2014) .", "We now look at the effect of using recursive composition on these ablated models.", "Results are given in Figure 4 .", "First of all, we observe unsurprisingly that composition using an LSTM cell is much better than using a simple recurrent cell.", "Second, both types of composition help the backward LSTM case, but neither reliably helps the bi models.", "Finally, the recurrent cell does not help the forward LSTM case but the LSTM cell does to some extent.", "It is interesting to note that using composition, especially using an LSTM cell, bridges a substantial part of the gap between the bw and the bi models.", "These results can be related to the literature on transition-based dependency parsing.", "Transitionbased parsers generally rely on two types of features: history-based features over the emerging dependency tree and lookahead features over the buffer of remaining input.", "The former are based on a hierarchical structure, the latter are purely sequential.", "McDonald and Nivre (2007) and Mc-Donald and Nivre (2011) have shown that historybased features enhance transition-based parsers as long as they do not suffer from error propagation.", "However, Nivre (2006) has also shown that lookahead features are absolutely crucial given the greedy left-to-right parsing strategy.", "In the model architectures considered here, the backward LSTM provides an improved lookahead.", "Similarly to the lookahead in statistical parsing, it is sequential.", "The difference is that it gives information about upcoming words with unbounded length.", "The forward LSTM in this model architecture provides history-based information but unlike in statistical parsing, that information is built sequentially rather than hierarchically: the forward LSTM passes through the sentence in the linear order of the sentence.", "In our results, we see that lookahead features are more important than the history-based ones.", "It hurts parsing accuracy more to ablate the backward LSTM than to ablate the forward one.", "This is expected given that some history-based information is still available through the top tokens on the stack, while the lookahead information is almost lost completely without the backward LSTM.", "A composition function gives hierarchical information about the 
history of parsing actions.", "It makes sense that it helps the backward LSTM model most since that model has no access to any information about parsing history.", "It helps the forward LSTM slightly which indicates that there can be gains from using structured information about parsing history rather than sequential information.", "We could then expect that composition should help the BiLSTM model which, how- Figure 5 : LAS of baseline, using char and/or POS tags to construct word representations ever, is not the case.", "This might be because the BiLSTM constructs information about parsing history and lookahead into a unique representation.", "In any case, this indicates that BiLSTMs are powerful feature extractors which seem to capture useful information about subtrees.", "Ablating POS and character information Next, we look at the effect of the different word representation methods on the different languages, as represented in Figure 5 .", "As is consistent with the literature de Lhoneux et al., 2017a; Smith et al., 2018b) , using characterbased word representations and/or POS tags consistently improves parsing accuracy but has a different impact in different languages and the benefits of both methods are not cumulative: using the two combined is not much better than using either on its own.", "In particular, character models are an efficient way to obtain large improvements in morphologically rich languages.", "We look at the impact of recursive compositions on all combinations of ablated models, see Table 1 .", "We only look at the impact of using an LSTM cell rather than a recurrent cell since it was a better technique across the board (see previous section).", "Looking first at BiLSTMs, it seems that composition does not reliably help parsing accuracy, regardless of access to POS and character information.", "This indicates that the vectors obtained from the BiLSTM already contain information that would otherwise be obtained by using composition.", "Turning to results with either the forward or the backward LSTM ablated, we see the expected pattern.", "Composition helps more when the model lacks POS tags, indicating that there is some redundancy between these two methods of building contextual information.", "Composition helps recover a substantial part of the gap of the model with a backward LSTM with or without POS tag.", "It recovers a much less substantial part of the gap in other cases which means that, although there is some redundancy between these different methods of building contextual information, they are still complementary and a recursive composition function cannot fully compensate for the lack of a backward LSTM or POS and/or character information.", "There are some language idiosyncracies in the results.", "While composition helps recover most of the gap for the backward LSTM models without POS and/or character information for Czech and English, it does it to a much smaller extent for Basque and Finnish.", "We hypothesise that arc depth might impact the usefulness of composition, since more depth means more matrix multiplications with the composition function.", "However, we find no correlation between average arc depth of the treebanks and usefulness of composition.", "It is an open question why composition helps some languages more than others.", "Note that we are not the first to use composition over vectors obtained from a BiLSTM in the context of dependency parsing, as this was done by Qi and Manning (2017) .", "The difference is that they compose vectors 
before scoring transitions.", "It was also done by Kiperwasser and Goldberg (2016a) who showed that using BiLSTM vectors for words in their Tree LSTM parser is helpful but they did not compare this to using BiLSTM vectors without the Tree LSTM.", "Recurrent and recursive LSTMs in the way they have been considered in this paper are two ways of constructing contextual information and making it available for local decisions in a greedy parser.", "The strength of recursive LSTMs is that they can build this contextual information using hierarchical context rather than linear context.", "A possible weakness is that this makes the model sensitive to error propagation: a wrong attachment leads to using the wrong contextual information.", "It is therefore possible that the benefits and drawbacks of using this method cancel each other out in the context of BiLSTMs.", "Ensemble To investigate further the information captured by BiLSTMs, we ensemble the 6 versions of the models with POS and character information with the different feature extractors (bi, bw, f w) with (+lc) and without composition.", "We use the (unweighted) reparsing technique of Sagae and Lavie (2006) 6 and ignoring labels.", "As can be seen from the UAS scores in Table 2 , the ensemble (full) largely outperforms the parser using only a BiLSTM, indicating that the information obtained from the different models is complementary.", "To investigate the contribution of each of the 6 models, we ablate each one by one.", "As can be seen from Table 2 , ablating either of the BiLSTM models or the backward LSTM using composition, results in the least effective of the ablated models, further strengthening the conclusion that BiL-STMs are powerful feature extractors.", "Conclusion We investigated the impact of composing the representation of subtrees in a transition-based parser.", "We observed that composition does not reliably help a parser that uses a BiLSTM for feature extraction, indicating that vectors obtained from the BiLSTM might capture subtree information, which is consistent with the results of Linzen et al.", "(2016) .", "However, we observe that, when ablating the backward LSTM, performance drops and recursive composition does not help to recover much of this gap.", "We hypothesise that this is because the backward LSTM primarily improves the lookahead for the greedy parser.", "When ablating the forward LSTM, performance drops to a smaller extent and recursive composition recovers a substantial part of the gap.", "This indicates that a forward LSTM and a recursive composition function capture similar information, which we take to be related to the rich history-based features crucial for a transition-based parser.", "To capture this infor-mation, a recursive composition function is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM.", "We further find that recursive composition helps more when POS tags are ablated from the model, indicating that POS tags and a recursive composition function are partly redundant ways of constructing contextual information.", "Finally, we correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages." ] }
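A minimal sketch of the two composition functions quoted in the paper content above, the ungated recurrent cell c = tanh(W[h; d; r] + b) and the LSTM-cell variant c = LSTM([h; d; r]). The paper's parser is built on DyNet; the PyTorch modules, batch-first tensor shapes and dimension names used here are illustrative assumptions rather than the authors' implementation.

    import torch
    import torch.nn as nn

    class RecurrentComposition(nn.Module):
        # Ungated composition: c = tanh(W [h; d; r] + b)
        def __init__(self, token_dim, rel_dim):
            super().__init__()
            self.affine = nn.Linear(2 * token_dim + rel_dim, token_dim)

        def forward(self, head, dep, rel):
            # head, dep: (batch, token_dim) vectors of head and dependent tokens
            # rel: (batch, rel_dim) embedding of the arc label and direction
            return torch.tanh(self.affine(torch.cat([head, dep, rel], dim=-1)))

    class LSTMComposition(nn.Module):
        # LSTM-cell composition: one LSTM per head subtree; each attachment
        # feeds [h; d; r] to that LSTM, and c is its new output state
        def __init__(self, token_dim, rel_dim):
            super().__init__()
            self.cell = nn.LSTMCell(2 * token_dim + rel_dim, token_dim)

        def forward(self, head, dep, rel, state=None):
            x = torch.cat([head, dep, rel], dim=-1)
            h, c = self.cell(x, state)   # state is None for a brand-new subtree
            return h, (h, c)             # composed vector and carried LSTM state

In the +rc and +lc parsers described above, this composed vector is what replaces the head's vector in the S-LSTM, or is carried alongside the token vector (v_i ∘ c_i) in the K&G variant, each time a dependent is attached.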
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "K&G Transition-Based Parsing", "Composing Subtree Representations", "Composition in a K&G Parser", "Results", "Ablating the forward and backward LSTMs", "Ablating POS and character information", "Ensemble", "Conclusion" ] }
GEM-SciDuet-train-107#paper-1284#slide-6
LSTM Feature Extractors
[Slide diagram: type vectors x_the, x_brown, x_fox, x_jumped, x_root are fed to a forward LSTM (LSTM_f) and a backward LSTM (LSTM_b), whose states are concatenated into token vectors v_the, v_brown, v_fox, v_jumped, v_root]
[Slide diagram: type vectors x_the, x_brown, x_fox, x_jumped, x_root are fed to a forward LSTM (LSTM_f) and a backward LSTM (LSTM_b), whose states are concatenated into token vectors v_the, v_brown, v_fox, v_jumped, v_root]
[]
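The record above defines three feature functions over the type vectors: bi(x, i) = BiLSTM(x_1:n, i), bw(x, i) = LSTM(x_n:1, i) and fw(x, i) = LSTM(x_1:n, i). A rough sketch of an extractor that can be switched between these three modes is given below; PyTorch, batch-first tensors and a single layer are assumptions made for illustration, not the configuration of the DyNet-based UUParser.

    import torch
    import torch.nn as nn

    class FeatureExtractor(nn.Module):
        # Maps type vectors x_1:n to token vectors v_1:n with a BiLSTM ("bi"),
        # a forward LSTM ("fw") or a backward LSTM ("bw", forward LSTM ablated).
        def __init__(self, in_dim, hidden_dim, mode="bi"):
            super().__init__()
            assert mode in ("bi", "fw", "bw")
            self.mode = mode
            self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True,
                                bidirectional=(mode == "bi"))

        def forward(self, x):                 # x: (batch, n, in_dim)
            if self.mode == "bw":             # a backward LSTM is a forward LSTM
                x = torch.flip(x, dims=[1])   # run over the reversed sequence
            v, _ = self.lstm(x)
            if self.mode == "bw":
                v = torch.flip(v, dims=[1])   # restore the original word order
            return v                          # (batch, n, hidden_dim or 2*hidden_dim)

The token vectors of the two top stack items and the first buffer item would then be concatenated and scored by an MLP, as in the K&G architecture described in the same record.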
GEM-SciDuet-train-107#paper-1284#slide-7
1284
Recursive Subtree Composition in LSTM-Based Dependency Parsing
The need for tree structure modelling on top of sequence modelling is an open issue in neural dependency parsing. We investigate the impact of adding a tree layer on top of a sequential model by recursively composing subtree representations (composition) in a transition-based parser that uses features extracted by a BiLSTM. Composition seems superfluous with such a model, suggesting that BiLSTMs capture information about subtrees. We perform model ablations to tease out the conditions under which composition helps. When ablating the backward LSTM, performance drops and composition does not recover much of the gap. When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information. We take the backward LSTM to be related to lookahead features and the forward LSTM to the rich history-based features both crucial for transition-based parsers. To capture history-based information, composition is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM. We correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203 ], "paper_content_text": [ "Introduction Recursive neural networks allow us to construct vector representations of trees or subtrees.", "They have been used for constituency parsing by Socher et al.", "(2013) and Dyer et al.", "(2016) and for dependency parsing by Stenetorp (2013) and , among others.", "In particular, showed that composing representations of subtrees using recursive neural networks can be beneficial for transition-based dependency parsing.", "These results were further strengthened in Kuncoro et al.", "(2017) who showed, using ablation experiments, that composition is key in the Recurrent Neural Network Grammar (RNNG) generative parser by Dyer et al.", "(2016) .", "In a parallel development, Kiperwasser and Goldberg (2016b) showed that using BiLSTMs for feature extraction can lead to high parsing accuracy even with fairly simple parsing architectures, and using BiLSTMs for feature extraction has therefore become very popular in dependency parsing.", "It is used in the state-of-the-art parser of Dozat and Manning (2017) , was used in 8 of the 10 highest performing systems of the 2017 CoNLL shared task (Zeman et al., 2017) and 10 out of the 10 highest performing systems of the 2018 CoNLL shared task (Zeman et al., 2018) .", "This raises the question of whether features extracted with BiLSTMs in themselves capture information about subtrees, thus making recursive composition superfluous.", "Some support for this hypothesis comes from the results of Linzen et al.", "(2016) which indicate that LSTMs can capture hierarchical information: they can be trained to predict long-distance number agreement in English.", "Those results were extended to more constructions and three additional languages by Gulordava et al.", "(2018) .", "However, Kuncoro et al.", "(2018) have also shown that although sequential LSTMs can learn syntactic information, a recursive neural network which explicitly models hierarchy (the RNNG model from ) is better at this: it performs better on the number agreement task from Linzen et al.", "(2016) .", "To further explore this question in the context of dependency parsing, we investigate the use of recursive composition (henceforth referred to as composition) in a parser with an architecture like the one in Kiperwasser and Goldberg (2016b) .", "This allows us to explore variations of features and isolate the conditions under which composi-tion is helpful.", "We hypothesise that the use of a BiLSTM for feature extraction makes it possible to capture information about subtrees and therefore makes the use of subtree composition superfluous.", "We 
hypothesise that composition becomes useful when part of the BiLSTM is ablated, the forward or the backward LSTM.", "We further hypothesise that composition is most useful when the parser has no access to information about the function of words in the context of the sentence given by POS tags.", "When using POS tags, the tagger has indeed had access to the full sentence.", "We additionally look at what happens when we ablate character vectors which have been shown to capture information which is partially overlapping with information from POS tags.", "We experiment with a wider variety of languages than in order to explore whether the usefulness of different model variants vary depending on language type.", "K&G Transition-Based Parsing We define the parsing architecture introduced by Kiperwasser and Goldberg (2016b) at a high level of abstraction and henceforth refer to it as K&G.", "A K&G parser is a greedy transition-based parser.", "1 For an input sentence of length n with words w 1 , .", ".", ".", ", w n , a sequence of vectors x 1:n is created, where the vector x i is a vector representation of the word w i .", "We refer to these as type vectors, as they are the same for all occurrences of a word type.", "Type vectors are then passed through a feature function which learns representations of words in the context of the sentence.", "x i = e(w i ) v i = f (x 1:n , i) We refer to the vector v i as a token vector, as it is different for different tokens of the same word type.", "In Kiperwasser and Goldberg (2016b) , the feature function used is a BiLSTM.", "As is usual in transition-based parsing, parsing involves taking transitions from an initial configuration to a terminal one.", "Parser configurations are represented by a stack, a buffer and set of dependency arcs (Nivre, 2008) .", "For each configuration c, the feature extractor concatenates the token representations of core elements from the stack and buffer.", "These token vectors are passed to a classifier, typically a Multilayer Perceptron (MLP).", "The MLP scores transitions together with the arc labels for transitions that involve adding an arc.", "Both the word type vectors and the BiLSTMs are trained together with the model.", "looked at the impact of using a recursive composition function in their parser, which is also a transition-based parser but with an architecture different from K&G.", "They make use of a variant of the LSTM called a stack LSTM.", "A stack LSTM has push and pop operations which allow passing through states in a tree structure rather than sequentially.", "Stack LSTMs are used to represent the stack, the buffer, and the sequence of past parsing actions performed for a configuration.", "Composing Subtree Representations The words of the sentence are represented by vectors of the word types, together with a vector representing the word's POS tag.", "In the initial configuration, the vectors of all words are in the buffer and the stack is empty.", "The representation of the buffer is the end state of a backward LSTM over the word vectors.", "As parsing evolves, the word vectors are popped from the buffer, pushed to and popped from the stack and the representations of stack and buffer get updated.", "define a recursive composition function and compose tree representations incrementally, as dependents get attached to their head.", "The composed representation c is built by concatenating the vector h of the head with the vector of the dependent d, as well as a vector r representing the label paired with the 
direction of the arc.", "That concatenated vector is passed through an affine transformation and then through a tanh non-linear activation.", "c = tanh(W [h; d; r] + b) They create two versions of the parser.", "In the first version, when a dependent is attached to a head, the word vector of the head is replaced by a composed vector of the head and dependent.", "In the second version, they simply keep the vector of the head when attaching a dependent to a head.", "They observe that the version with composition is substantially better than the version without, by 1.3 LAS points for English (on the Penn Treebank (PTB) test set) and 2.1 for Chinese (on the Chinese Treebank (CTB) test set).", "Their parser uses POS tag information.", "POS tags help to disambiguate between different functional uses of a word and in this way give information about the use of the word in context.", "We hypothesise that the effect of using a recursive composition function is stronger when not making use of POS tags.", "Composition in a K&G Parser The parsing architectures of the stack LSTM parser (S-LSTM) and K&G are different but have some similarities.", "2 In both cases, the configuration is represented by vectors obtained by LSTMs.", "In K&G, it is represented by the token vectors of top items of the stack and the first item of the buffer.", "In the S-LSTM, it is represented by the vector representations of the entire stack, buffer and sequence of past transitions.", "Both types of parsers learn vector representations of word types which are passed to an LSTM.", "In K&G, they are passed to an LSTM in a feature extraction step that happens before parsing.", "The LSTM in this case is used to learn vectors that have information about the context of each word, a token vector.", "In the S-LSTM, word type vectors are passed to Stack LSTMs as parsing evolves.", "In this case, LSTMs are used to learn vector representations of the stack and buffer (as well as one which learns a representation of the parsing action history).", "When composition is not used in the S-LSTM, word vectors represent word types.", "When composition is used, as parsing evolves, the stack and buffer vectors get updated with information about the subtrees they contain, so that they gradually become contextualised.", "In this sense, those vectors become more like token vectors in K&G.", "More specifically, as explained in the previous section, when a dependent is attached to its head, the composition function is applied to the vectors of head and dependent and the vector of the head is replaced by this composed vector.", "We cannot apply composition on type vectors in the K&G architecture, since they are not used after the feature extraction step and hence cannot influence the representation of the configuration.", "Instead, we apply composition on the token vectors.", "We embed those composed representations in the same space as the token vectors.", "In K&G, like in the S-LSTM, we can create a composition function and compose the representation of subtrees as parsing evolves.", "We create two versions of the parser, one where word tokens are represented by their token vector.", "The other where they are represented by their token vector and the vector of their subtree c i , which is initially just a copy of the token vector (v i = f (x 1:n , i)β€’c i ).", "When a dependent word d is attached to a word h with a relation and direction r, c i is computed with the same composition function as in the S-LSTM defined in the previous section, repeated 
below.", "3 This composition function is a simple recurrent cell.", "Simple RNNs have known shortcomings which have been addressed by using LSTMs, as proposed by Hochreiter and Schmidhuber (1997) .", "A natural extension to this composition function is therefore to replace it with an LSTM cell.", "We also try this variant.", "We construct LSTMs for subtrees.", "We initialise a new LSTM for each new subtree that is formed, that is, when a dependent d is attached to a head h which does not have any dependent yet.", "Each time we attach a dependent to a head, we construct a vector which is a concatenation of h, d and r. We pass this vector to the LSTM of h. c is the output state of the LSTM after passing through that vector.", "We denote those models with +rc for the one using an ungated recurrent cell and with +lc for the one using an LSTM cell.", "c = tanh(W [h; d; r] + b) c = LSTM([h; d; r]) As results show (see Β§ 5), neither type of composition seems useful when used with the K&G parsing model, which indicates that BiLSTMs capture information about subtrees.", "To further investigate this and in order to isolate the conditions under which composition is helpful, we perform different model ablations and test the impact of recursive composition on these ablated models.", "First, we ablate parts of the BiLSTMs: we ablate either the forward or the backward LSTM.", "We therefore build parsers with 3 different feature functions f (x, i) over the word type vectors x i in the sentence x: a BiLSTM (bi) (our baseline), a backward LSTM (bw) (i.e., ablating the forward LSTM) and a forward LSTM (f w) (i.e., ablating the backward LSTM): bi(x, i) = BILSTM(x 1:n , i) bw(x, i) = LSTM(x n:1 , i) f w(x, i) = LSTM(x 1:n , i) K&G parsers with unidirectional LSTMs are, in some sense, more similar to the S-LSTM than those with a BiLSTM, since the S-LSTM only uses unidirectional LSTMs.", "We hypothesise that composition will help the parser using unidirectional LSTMs in the same way it helps an S-LSTM.", "We additionally experiment with the vector representing the word at the input of the LSTM.", "The most complex representation consists of a concatenation of an embedding of the word type e(w i ), an embedding of the (predicted) POS tag of w i , p(w i ) and a character representation of the word obtained by running a BiLSTM over the characters ch 1:m of w i (BiLSTM(ch 1:m )).", "x i = e(w i ) β€’ p(w i ) β€’ BiLSTM(ch 1:m ) Without a POS tag embedding, the word vector is a representation of the word type.", "With POS information, we have some information about the word in the context of the sentence and the tagger has had access to the full sentence.", "The representation of the word at the input of the BiLSTM is therefore more contextualised and it can be expected that a recursive composition function will be less helpful than when POS information is not used.", "Character information has been shown to be useful for dependency parsing first by .", "and Smith et al.", "(2018b) among others have shown that POS and character information are somewhat complementary.", "used similar character vectors in the S-LSTM parser but did not look at the impact of composition when using these vectors.", "Here, we experiment with ablating either or both of the character and POS vectors.", "We look at the impact of using composition on the full model as well as these ablated models.", "We hypothesise that composition is most helpful when those vectors are not used, since they give information about the functional use of the 
word in context.", "Parser We use UUParser, a variant of the K&G transition-based parser that employs the arc-hybrid transition system from Kuhlmann et al.", "(2011) extended with a SWAP transition and a Static-Dynamic oracle, as described in de Lhoneux et al.", "(2017b) 4 .", "The SWAP transition is used to allow the construction of non-projective dependency trees (Nivre, 2009) .", "We use default hyperparameters.", "When using POS tags, we use the universal POS tags from the UD treebanks which are coarsegrained and consistent across languages.", "Those POS tags are predicted by UDPipe (Straka et al., 2016) both for training and parsing.", "This parser obtained the 7th best LAS score on average in the 2018 CoNLL shared task (Zeman et al., 2018) , about 2.5 LAS points below the best system, which uses an ensemble system as well as ELMo embeddings, as introduced by Peters et al.", "(2018) .", "Note, however, that we use a slightly impoverished version of the model used for the shared task which is described in Smith et al.", "(2018a) : we use a less accurate POS tagger (UDPipe) and we do not make use of multi-treebank models.", "In addition, Smith et al.", "(2018a) use the three top items of the stack as well as the first item of the buffer to represent the configuration, while we only use the two top items of the stack and the first item of the buffer.", "Smith et al.", "(2018a) also use an extended feature set as introduced by Kiperwasser and Goldberg (2016b) where they also use the rightmost and leftmost children of the items of the stack and buffer that they consider.", "We do not use that extended feature set.", "This is to keep the parser settings as simple as possible and avoid adding confounding factors.", "It is still a near-SOTA model.", "We evaluate parsing models on the development sets and report the average of the 5 best results in 30 epochs and 5 runs with different random seeds.", "Data We test our models on a sample of treebanks from Universal Dependencies v2.1 (Nivre et al., 2017) .", "We follow the criteria from de Lhoneux et al.", "(2017c) to select our sample: we ensure typological variety, we ensure variety of domains, we verify the quality of the treebanks, and we use one treebank with a large amount of non-projective arcs.", "However, unlike them, we do not use extremely small treebanks.", "Our selection is the same as theirs but we remove the tiny treebanks and replace them with 3 others.", "Our final set is: Ancient Greek (PROIEL), Basque, Chinese, Czech, English, Finnish, French, Hebrew and Japanese.", "Results First, we look at the effect of our different recursive composition functions on the full model (i.e., the model using a BiLSTM for feature extraction as well as both character and POS tag information).", "As can be seen from Figure 1 , recursive composition using an LSTM cell (+lc) is generally better than recursive composition with a recurrent cell (+rc), but neither technique reliably improves the accuracy of a BiLSTM parser.", "Ablating the forward and backward LSTMs Second, we only consider the models using character and POS information and look at the effect of ablating parts of the BiLSTM on the different languages.", "The results can be seen in Figure 2 .", "As expected, the BiLSTM parser performs considerably better than both unidirectional LSTM parsers, and the backward LSTM is considerably better than the forward LSTM, on average.", "It is, however, interesting to note that using a forward LSTM is much more hurtful for some languages than others: 
it is especially hurtful for Chinese and Japanese.", "This can be explained by language properties: the right-headed languages suffer more from ablating the backward LSTM than other languages.", "We observe a correlation between how hurtful a forward model is compared to the baseline and the percentage of right-headed content dependency relations (R = βˆ’0.838, p < .01), see Figure 3 .", "5 There is no significant correlation between how hurtful ablating the forward LSTM is and the percentage of left-headed content dependency relations (p > .05) indicating that its usefulness is not dependent on language properties.", "We hypothesise that dependency length or sentence length can play a role but we also find no correlation between how hurtful it is to ablate the forward LSTM and average dependency or sentence length in treebanks.", "It is finally also interesting to note that the backward LSTM performance is close to the BiLSTMs performance for some languages (Japanese and French).", "5 The reason we only consider content dependency relations is that the UD scheme focuses on dependency relations between content words and treats function words as features of content words to maximise parallelism across languages (de Marneffe et al., 2014) .", "We now look at the effect of using recursive composition on these ablated models.", "Results are given in Figure 4 .", "First of all, we observe unsurprisingly that composition using an LSTM cell is much better than using a simple recurrent cell.", "Second, both types of composition help the backward LSTM case, but neither reliably helps the bi models.", "Finally, the recurrent cell does not help the forward LSTM case but the LSTM cell does to some extent.", "It is interesting to note that using composition, especially using an LSTM cell, bridges a substantial part of the gap between the bw and the bi models.", "These results can be related to the literature on transition-based dependency parsing.", "Transitionbased parsers generally rely on two types of features: history-based features over the emerging dependency tree and lookahead features over the buffer of remaining input.", "The former are based on a hierarchical structure, the latter are purely sequential.", "McDonald and Nivre (2007) and Mc-Donald and Nivre (2011) have shown that historybased features enhance transition-based parsers as long as they do not suffer from error propagation.", "However, Nivre (2006) has also shown that lookahead features are absolutely crucial given the greedy left-to-right parsing strategy.", "In the model architectures considered here, the backward LSTM provides an improved lookahead.", "Similarly to the lookahead in statistical parsing, it is sequential.", "The difference is that it gives information about upcoming words with unbounded length.", "The forward LSTM in this model architecture provides history-based information but unlike in statistical parsing, that information is built sequentially rather than hierarchically: the forward LSTM passes through the sentence in the linear order of the sentence.", "In our results, we see that lookahead features are more important than the history-based ones.", "It hurts parsing accuracy more to ablate the backward LSTM than to ablate the forward one.", "This is expected given that some history-based information is still available through the top tokens on the stack, while the lookahead information is almost lost completely without the backward LSTM.", "A composition function gives hierarchical information about the 
history of parsing actions.", "It makes sense that it helps the backward LSTM model most since that model has no access to any information about parsing history.", "It helps the forward LSTM slightly which indicates that there can be gains from using structured information about parsing history rather than sequential information.", "We could then expect that composition should help the BiLSTM model which, how- Figure 5 : LAS of baseline, using char and/or POS tags to construct word representations ever, is not the case.", "This might be because the BiLSTM constructs information about parsing history and lookahead into a unique representation.", "In any case, this indicates that BiLSTMs are powerful feature extractors which seem to capture useful information about subtrees.", "Ablating POS and character information Next, we look at the effect of the different word representation methods on the different languages, as represented in Figure 5 .", "As is consistent with the literature de Lhoneux et al., 2017a; Smith et al., 2018b) , using characterbased word representations and/or POS tags consistently improves parsing accuracy but has a different impact in different languages and the benefits of both methods are not cumulative: using the two combined is not much better than using either on its own.", "In particular, character models are an efficient way to obtain large improvements in morphologically rich languages.", "We look at the impact of recursive compositions on all combinations of ablated models, see Table 1 .", "We only look at the impact of using an LSTM cell rather than a recurrent cell since it was a better technique across the board (see previous section).", "Looking first at BiLSTMs, it seems that composition does not reliably help parsing accuracy, regardless of access to POS and character information.", "This indicates that the vectors obtained from the BiLSTM already contain information that would otherwise be obtained by using composition.", "Turning to results with either the forward or the backward LSTM ablated, we see the expected pattern.", "Composition helps more when the model lacks POS tags, indicating that there is some redundancy between these two methods of building contextual information.", "Composition helps recover a substantial part of the gap of the model with a backward LSTM with or without POS tag.", "It recovers a much less substantial part of the gap in other cases which means that, although there is some redundancy between these different methods of building contextual information, they are still complementary and a recursive composition function cannot fully compensate for the lack of a backward LSTM or POS and/or character information.", "There are some language idiosyncracies in the results.", "While composition helps recover most of the gap for the backward LSTM models without POS and/or character information for Czech and English, it does it to a much smaller extent for Basque and Finnish.", "We hypothesise that arc depth might impact the usefulness of composition, since more depth means more matrix multiplications with the composition function.", "However, we find no correlation between average arc depth of the treebanks and usefulness of composition.", "It is an open question why composition helps some languages more than others.", "Note that we are not the first to use composition over vectors obtained from a BiLSTM in the context of dependency parsing, as this was done by Qi and Manning (2017) .", "The difference is that they compose vectors 
before scoring transitions.", "It was also done by Kiperwasser and Goldberg (2016a) who showed that using BiLSTM vectors for words in their Tree LSTM parser is helpful but they did not compare this to using BiLSTM vectors without the Tree LSTM.", "Recurrent and recursive LSTMs in the way they have been considered in this paper are two ways of constructing contextual information and making it available for local decisions in a greedy parser.", "The strength of recursive LSTMs is that they can build this contextual information using hierarchical context rather than linear context.", "A possible weakness is that this makes the model sensitive to error propagation: a wrong attachment leads to using the wrong contextual information.", "It is therefore possible that the benefits and drawbacks of using this method cancel each other out in the context of BiLSTMs.", "Ensemble To investigate further the information captured by BiLSTMs, we ensemble the 6 versions of the models with POS and character information with the different feature extractors (bi, bw, f w) with (+lc) and without composition.", "We use the (unweighted) reparsing technique of Sagae and Lavie (2006) 6 and ignoring labels.", "As can be seen from the UAS scores in Table 2 , the ensemble (full) largely outperforms the parser using only a BiLSTM, indicating that the information obtained from the different models is complementary.", "To investigate the contribution of each of the 6 models, we ablate each one by one.", "As can be seen from Table 2 , ablating either of the BiLSTM models or the backward LSTM using composition, results in the least effective of the ablated models, further strengthening the conclusion that BiL-STMs are powerful feature extractors.", "Conclusion We investigated the impact of composing the representation of subtrees in a transition-based parser.", "We observed that composition does not reliably help a parser that uses a BiLSTM for feature extraction, indicating that vectors obtained from the BiLSTM might capture subtree information, which is consistent with the results of Linzen et al.", "(2016) .", "However, we observe that, when ablating the backward LSTM, performance drops and recursive composition does not help to recover much of this gap.", "We hypothesise that this is because the backward LSTM primarily improves the lookahead for the greedy parser.", "When ablating the forward LSTM, performance drops to a smaller extent and recursive composition recovers a substantial part of the gap.", "This indicates that a forward LSTM and a recursive composition function capture similar information, which we take to be related to the rich history-based features crucial for a transition-based parser.", "To capture this infor-mation, a recursive composition function is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM.", "We further find that recursive composition helps more when POS tags are ablated from the model, indicating that POS tags and a recursive composition function are partly redundant ways of constructing contextual information.", "Finally, we correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "K&G Transition-Based Parsing", "Composing Subtree Representations", "Composition in a K&G Parser", "Results", "Ablating the forward and backward LSTMs", "Ablating POS and character information", "Ensemble", "Conclusion" ] }
GEM-SciDuet-train-107#paper-1284#slide-7
Results BiLSTM ablations
[Slide chart: per-language results for the BiLSTM ablations (languages include cs, en, eu, fi, grc, he, ja, zh, plus the average); slide 16/22, Miryam de Lhoneux, Miguel Ballesteros and Joakim Nivre]
[Slide chart: per-language results for the BiLSTM ablations (languages include cs, en, eu, fi, grc, he, ja, zh, plus the average); slide 16/22, Miryam de Lhoneux, Miguel Ballesteros and Joakim Nivre]
[]
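The records above also describe the richest word representation, x_i = e(w_i) ∘ p(w_i) ∘ BiLSTM(ch_1:m), a concatenation of a word-type embedding, a POS-tag embedding and the final states of a character BiLSTM. A possible sketch, again with assumed PyTorch modules and illustrative vocabulary sizes and dimensions rather than the authors' DyNet settings:

    import torch
    import torch.nn as nn

    class WordRepresentation(nn.Module):
        # Concatenates a word-type embedding, a POS-tag embedding and the final
        # forward/backward states of a character BiLSTM into the type vector x_i.
        def __init__(self, n_words, n_tags, n_chars,
                     word_dim=100, tag_dim=20, char_dim=24, char_hidden=50):
            super().__init__()
            self.word_emb = nn.Embedding(n_words, word_dim)
            self.tag_emb = nn.Embedding(n_tags, tag_dim)
            self.char_emb = nn.Embedding(n_chars, char_dim)
            self.char_lstm = nn.LSTM(char_dim, char_hidden,
                                     bidirectional=True, batch_first=True)

        def forward(self, word_id, tag_id, char_ids):
            # word_id, tag_id: (1,) index tensors; char_ids: (1, m) character indices
            _, (h_n, _) = self.char_lstm(self.char_emb(char_ids))
            char_vec = torch.cat([h_n[0], h_n[1]], dim=-1)   # fw and bw end states
            return torch.cat([self.word_emb(word_id),
                              self.tag_emb(tag_id),
                              char_vec], dim=-1)

Ablating POS tags or characters, as in the experiments summarised above, amounts to dropping the corresponding pieces from this concatenation.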
GEM-SciDuet-train-107#paper-1284#slide-8
1284
Recursive Subtree Composition in LSTM-Based Dependency Parsing
The need for tree structure modelling on top of sequence modelling is an open issue in neural dependency parsing. We investigate the impact of adding a tree layer on top of a sequential model by recursively composing subtree representations (composition) in a transition-based parser that uses features extracted by a BiLSTM. Composition seems superfluous with such a model, suggesting that BiLSTMs capture information about subtrees. We perform model ablations to tease out the conditions under which composition helps. When ablating the backward LSTM, performance drops and composition does not recover much of the gap. When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information. We take the backward LSTM to be related to lookahead features and the forward LSTM to the rich history-based features both crucial for transition-based parsers. To capture history-based information, composition is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM. We correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203 ], "paper_content_text": [ "Introduction Recursive neural networks allow us to construct vector representations of trees or subtrees.", "They have been used for constituency parsing by Socher et al.", "(2013) and Dyer et al.", "(2016) and for dependency parsing by Stenetorp (2013) and , among others.", "In particular, showed that composing representations of subtrees using recursive neural networks can be beneficial for transition-based dependency parsing.", "These results were further strengthened in Kuncoro et al.", "(2017) who showed, using ablation experiments, that composition is key in the Recurrent Neural Network Grammar (RNNG) generative parser by Dyer et al.", "(2016) .", "In a parallel development, Kiperwasser and Goldberg (2016b) showed that using BiLSTMs for feature extraction can lead to high parsing accuracy even with fairly simple parsing architectures, and using BiLSTMs for feature extraction has therefore become very popular in dependency parsing.", "It is used in the state-of-the-art parser of Dozat and Manning (2017) , was used in 8 of the 10 highest performing systems of the 2017 CoNLL shared task (Zeman et al., 2017) and 10 out of the 10 highest performing systems of the 2018 CoNLL shared task (Zeman et al., 2018) .", "This raises the question of whether features extracted with BiLSTMs in themselves capture information about subtrees, thus making recursive composition superfluous.", "Some support for this hypothesis comes from the results of Linzen et al.", "(2016) which indicate that LSTMs can capture hierarchical information: they can be trained to predict long-distance number agreement in English.", "Those results were extended to more constructions and three additional languages by Gulordava et al.", "(2018) .", "However, Kuncoro et al.", "(2018) have also shown that although sequential LSTMs can learn syntactic information, a recursive neural network which explicitly models hierarchy (the RNNG model from ) is better at this: it performs better on the number agreement task from Linzen et al.", "(2016) .", "To further explore this question in the context of dependency parsing, we investigate the use of recursive composition (henceforth referred to as composition) in a parser with an architecture like the one in Kiperwasser and Goldberg (2016b) .", "This allows us to explore variations of features and isolate the conditions under which composi-tion is helpful.", "We hypothesise that the use of a BiLSTM for feature extraction makes it possible to capture information about subtrees and therefore makes the use of subtree composition superfluous.", "We 
hypothesise that composition becomes useful when part of the BiLSTM is ablated, the forward or the backward LSTM.", "We further hypothesise that composition is most useful when the parser has no access to information about the function of words in the context of the sentence given by POS tags.", "When using POS tags, the tagger has indeed had access to the full sentence.", "We additionally look at what happens when we ablate character vectors which have been shown to capture information which is partially overlapping with information from POS tags.", "We experiment with a wider variety of languages than in order to explore whether the usefulness of different model variants vary depending on language type.", "K&G Transition-Based Parsing We define the parsing architecture introduced by Kiperwasser and Goldberg (2016b) at a high level of abstraction and henceforth refer to it as K&G.", "A K&G parser is a greedy transition-based parser.", "1 For an input sentence of length n with words w 1 , .", ".", ".", ", w n , a sequence of vectors x 1:n is created, where the vector x i is a vector representation of the word w i .", "We refer to these as type vectors, as they are the same for all occurrences of a word type.", "Type vectors are then passed through a feature function which learns representations of words in the context of the sentence.", "x i = e(w i ) v i = f (x 1:n , i) We refer to the vector v i as a token vector, as it is different for different tokens of the same word type.", "In Kiperwasser and Goldberg (2016b) , the feature function used is a BiLSTM.", "As is usual in transition-based parsing, parsing involves taking transitions from an initial configuration to a terminal one.", "Parser configurations are represented by a stack, a buffer and set of dependency arcs (Nivre, 2008) .", "For each configuration c, the feature extractor concatenates the token representations of core elements from the stack and buffer.", "These token vectors are passed to a classifier, typically a Multilayer Perceptron (MLP).", "The MLP scores transitions together with the arc labels for transitions that involve adding an arc.", "Both the word type vectors and the BiLSTMs are trained together with the model.", "looked at the impact of using a recursive composition function in their parser, which is also a transition-based parser but with an architecture different from K&G.", "They make use of a variant of the LSTM called a stack LSTM.", "A stack LSTM has push and pop operations which allow passing through states in a tree structure rather than sequentially.", "Stack LSTMs are used to represent the stack, the buffer, and the sequence of past parsing actions performed for a configuration.", "Composing Subtree Representations The words of the sentence are represented by vectors of the word types, together with a vector representing the word's POS tag.", "In the initial configuration, the vectors of all words are in the buffer and the stack is empty.", "The representation of the buffer is the end state of a backward LSTM over the word vectors.", "As parsing evolves, the word vectors are popped from the buffer, pushed to and popped from the stack and the representations of stack and buffer get updated.", "define a recursive composition function and compose tree representations incrementally, as dependents get attached to their head.", "The composed representation c is built by concatenating the vector h of the head with the vector of the dependent d, as well as a vector r representing the label paired with the 
direction of the arc.", "That concatenated vector is passed through an affine transformation and then through a tanh non-linear activation.", "c = tanh(W [h; d; r] + b) They create two versions of the parser.", "In the first version, when a dependent is attached to a head, the word vector of the head is replaced by a composed vector of the head and dependent.", "In the second version, they simply keep the vector of the head when attaching a dependent to a head.", "They observe that the version with composition is substantially better than the version without, by 1.3 LAS points for English (on the Penn Treebank (PTB) test set) and 2.1 for Chinese (on the Chinese Treebank (CTB) test set).", "Their parser uses POS tag information.", "POS tags help to disambiguate between different functional uses of a word and in this way give information about the use of the word in context.", "We hypothesise that the effect of using a recursive composition function is stronger when not making use of POS tags.", "Composition in a K&G Parser The parsing architectures of the stack LSTM parser (S-LSTM) and K&G are different but have some similarities.", "2 In both cases, the configuration is represented by vectors obtained by LSTMs.", "In K&G, it is represented by the token vectors of top items of the stack and the first item of the buffer.", "In the S-LSTM, it is represented by the vector representations of the entire stack, buffer and sequence of past transitions.", "Both types of parsers learn vector representations of word types which are passed to an LSTM.", "In K&G, they are passed to an LSTM in a feature extraction step that happens before parsing.", "The LSTM in this case is used to learn vectors that have information about the context of each word, a token vector.", "In the S-LSTM, word type vectors are passed to Stack LSTMs as parsing evolves.", "In this case, LSTMs are used to learn vector representations of the stack and buffer (as well as one which learns a representation of the parsing action history).", "When composition is not used in the S-LSTM, word vectors represent word types.", "When composition is used, as parsing evolves, the stack and buffer vectors get updated with information about the subtrees they contain, so that they gradually become contextualised.", "In this sense, those vectors become more like token vectors in K&G.", "More specifically, as explained in the previous section, when a dependent is attached to its head, the composition function is applied to the vectors of head and dependent and the vector of the head is replaced by this composed vector.", "We cannot apply composition on type vectors in the K&G architecture, since they are not used after the feature extraction step and hence cannot influence the representation of the configuration.", "Instead, we apply composition on the token vectors.", "We embed those composed representations in the same space as the token vectors.", "In K&G, like in the S-LSTM, we can create a composition function and compose the representation of subtrees as parsing evolves.", "We create two versions of the parser, one where word tokens are represented by their token vector.", "The other where they are represented by their token vector and the vector of their subtree c i , which is initially just a copy of the token vector (v i = f (x 1:n , i)β€’c i ).", "When a dependent word d is attached to a word h with a relation and direction r, c i is computed with the same composition function as in the S-LSTM defined in the previous section, repeated 
below.", "3 This composition function is a simple recurrent cell.", "Simple RNNs have known shortcomings which have been addressed by using LSTMs, as proposed by Hochreiter and Schmidhuber (1997) .", "A natural extension to this composition function is therefore to replace it with an LSTM cell.", "We also try this variant.", "We construct LSTMs for subtrees.", "We initialise a new LSTM for each new subtree that is formed, that is, when a dependent d is attached to a head h which does not have any dependent yet.", "Each time we attach a dependent to a head, we construct a vector which is a concatenation of h, d and r. We pass this vector to the LSTM of h. c is the output state of the LSTM after passing through that vector.", "We denote those models with +rc for the one using an ungated recurrent cell and with +lc for the one using an LSTM cell.", "c = tanh(W [h; d; r] + b) c = LSTM([h; d; r]) As results show (see Β§ 5), neither type of composition seems useful when used with the K&G parsing model, which indicates that BiLSTMs capture information about subtrees.", "To further investigate this and in order to isolate the conditions under which composition is helpful, we perform different model ablations and test the impact of recursive composition on these ablated models.", "First, we ablate parts of the BiLSTMs: we ablate either the forward or the backward LSTM.", "We therefore build parsers with 3 different feature functions f (x, i) over the word type vectors x i in the sentence x: a BiLSTM (bi) (our baseline), a backward LSTM (bw) (i.e., ablating the forward LSTM) and a forward LSTM (f w) (i.e., ablating the backward LSTM): bi(x, i) = BILSTM(x 1:n , i) bw(x, i) = LSTM(x n:1 , i) f w(x, i) = LSTM(x 1:n , i) K&G parsers with unidirectional LSTMs are, in some sense, more similar to the S-LSTM than those with a BiLSTM, since the S-LSTM only uses unidirectional LSTMs.", "We hypothesise that composition will help the parser using unidirectional LSTMs in the same way it helps an S-LSTM.", "We additionally experiment with the vector representing the word at the input of the LSTM.", "The most complex representation consists of a concatenation of an embedding of the word type e(w i ), an embedding of the (predicted) POS tag of w i , p(w i ) and a character representation of the word obtained by running a BiLSTM over the characters ch 1:m of w i (BiLSTM(ch 1:m )).", "x i = e(w i ) β€’ p(w i ) β€’ BiLSTM(ch 1:m ) Without a POS tag embedding, the word vector is a representation of the word type.", "With POS information, we have some information about the word in the context of the sentence and the tagger has had access to the full sentence.", "The representation of the word at the input of the BiLSTM is therefore more contextualised and it can be expected that a recursive composition function will be less helpful than when POS information is not used.", "Character information has been shown to be useful for dependency parsing first by .", "and Smith et al.", "(2018b) among others have shown that POS and character information are somewhat complementary.", "used similar character vectors in the S-LSTM parser but did not look at the impact of composition when using these vectors.", "Here, we experiment with ablating either or both of the character and POS vectors.", "We look at the impact of using composition on the full model as well as these ablated models.", "We hypothesise that composition is most helpful when those vectors are not used, since they give information about the functional use of the 
word in context.", "Parser We use UUParser, a variant of the K&G transition-based parser that employs the arc-hybrid transition system from Kuhlmann et al.", "(2011) extended with a SWAP transition and a Static-Dynamic oracle, as described in de Lhoneux et al.", "(2017b) 4 .", "The SWAP transition is used to allow the construction of non-projective dependency trees (Nivre, 2009) .", "We use default hyperparameters.", "When using POS tags, we use the universal POS tags from the UD treebanks which are coarsegrained and consistent across languages.", "Those POS tags are predicted by UDPipe (Straka et al., 2016) both for training and parsing.", "This parser obtained the 7th best LAS score on average in the 2018 CoNLL shared task (Zeman et al., 2018) , about 2.5 LAS points below the best system, which uses an ensemble system as well as ELMo embeddings, as introduced by Peters et al.", "(2018) .", "Note, however, that we use a slightly impoverished version of the model used for the shared task which is described in Smith et al.", "(2018a) : we use a less accurate POS tagger (UDPipe) and we do not make use of multi-treebank models.", "In addition, Smith et al.", "(2018a) use the three top items of the stack as well as the first item of the buffer to represent the configuration, while we only use the two top items of the stack and the first item of the buffer.", "Smith et al.", "(2018a) also use an extended feature set as introduced by Kiperwasser and Goldberg (2016b) where they also use the rightmost and leftmost children of the items of the stack and buffer that they consider.", "We do not use that extended feature set.", "This is to keep the parser settings as simple as possible and avoid adding confounding factors.", "It is still a near-SOTA model.", "We evaluate parsing models on the development sets and report the average of the 5 best results in 30 epochs and 5 runs with different random seeds.", "Data We test our models on a sample of treebanks from Universal Dependencies v2.1 (Nivre et al., 2017) .", "We follow the criteria from de Lhoneux et al.", "(2017c) to select our sample: we ensure typological variety, we ensure variety of domains, we verify the quality of the treebanks, and we use one treebank with a large amount of non-projective arcs.", "However, unlike them, we do not use extremely small treebanks.", "Our selection is the same as theirs but we remove the tiny treebanks and replace them with 3 others.", "Our final set is: Ancient Greek (PROIEL), Basque, Chinese, Czech, English, Finnish, French, Hebrew and Japanese.", "Results First, we look at the effect of our different recursive composition functions on the full model (i.e., the model using a BiLSTM for feature extraction as well as both character and POS tag information).", "As can be seen from Figure 1 , recursive composition using an LSTM cell (+lc) is generally better than recursive composition with a recurrent cell (+rc), but neither technique reliably improves the accuracy of a BiLSTM parser.", "Ablating the forward and backward LSTMs Second, we only consider the models using character and POS information and look at the effect of ablating parts of the BiLSTM on the different languages.", "The results can be seen in Figure 2 .", "As expected, the BiLSTM parser performs considerably better than both unidirectional LSTM parsers, and the backward LSTM is considerably better than the forward LSTM, on average.", "It is, however, interesting to note that using a forward LSTM is much more hurtful for some languages than others: 
it is especially hurtful for Chinese and Japanese.", "This can be explained by language properties: the right-headed languages suffer more from ablating the backward LSTM than other languages.", "We observe a correlation between how hurtful a forward model is compared to the baseline and the percentage of right-headed content dependency relations (R = βˆ’0.838, p < .01), see Figure 3 .", "5 There is no significant correlation between how hurtful ablating the forward LSTM is and the percentage of left-headed content dependency relations (p > .05) indicating that its usefulness is not dependent on language properties.", "We hypothesise that dependency length or sentence length can play a role but we also find no correlation between how hurtful it is to ablate the forward LSTM and average dependency or sentence length in treebanks.", "It is finally also interesting to note that the backward LSTM performance is close to the BiLSTMs performance for some languages (Japanese and French).", "5 The reason we only consider content dependency relations is that the UD scheme focuses on dependency relations between content words and treats function words as features of content words to maximise parallelism across languages (de Marneffe et al., 2014) .", "We now look at the effect of using recursive composition on these ablated models.", "Results are given in Figure 4 .", "First of all, we observe unsurprisingly that composition using an LSTM cell is much better than using a simple recurrent cell.", "Second, both types of composition help the backward LSTM case, but neither reliably helps the bi models.", "Finally, the recurrent cell does not help the forward LSTM case but the LSTM cell does to some extent.", "It is interesting to note that using composition, especially using an LSTM cell, bridges a substantial part of the gap between the bw and the bi models.", "These results can be related to the literature on transition-based dependency parsing.", "Transitionbased parsers generally rely on two types of features: history-based features over the emerging dependency tree and lookahead features over the buffer of remaining input.", "The former are based on a hierarchical structure, the latter are purely sequential.", "McDonald and Nivre (2007) and Mc-Donald and Nivre (2011) have shown that historybased features enhance transition-based parsers as long as they do not suffer from error propagation.", "However, Nivre (2006) has also shown that lookahead features are absolutely crucial given the greedy left-to-right parsing strategy.", "In the model architectures considered here, the backward LSTM provides an improved lookahead.", "Similarly to the lookahead in statistical parsing, it is sequential.", "The difference is that it gives information about upcoming words with unbounded length.", "The forward LSTM in this model architecture provides history-based information but unlike in statistical parsing, that information is built sequentially rather than hierarchically: the forward LSTM passes through the sentence in the linear order of the sentence.", "In our results, we see that lookahead features are more important than the history-based ones.", "It hurts parsing accuracy more to ablate the backward LSTM than to ablate the forward one.", "This is expected given that some history-based information is still available through the top tokens on the stack, while the lookahead information is almost lost completely without the backward LSTM.", "A composition function gives hierarchical information about the 
history of parsing actions.", "It makes sense that it helps the backward LSTM model most since that model has no access to any information about parsing history.", "It helps the forward LSTM slightly which indicates that there can be gains from using structured information about parsing history rather than sequential information.", "We could then expect that composition should help the BiLSTM model which, how- Figure 5 : LAS of baseline, using char and/or POS tags to construct word representations ever, is not the case.", "This might be because the BiLSTM constructs information about parsing history and lookahead into a unique representation.", "In any case, this indicates that BiLSTMs are powerful feature extractors which seem to capture useful information about subtrees.", "Ablating POS and character information Next, we look at the effect of the different word representation methods on the different languages, as represented in Figure 5 .", "As is consistent with the literature de Lhoneux et al., 2017a; Smith et al., 2018b) , using characterbased word representations and/or POS tags consistently improves parsing accuracy but has a different impact in different languages and the benefits of both methods are not cumulative: using the two combined is not much better than using either on its own.", "In particular, character models are an efficient way to obtain large improvements in morphologically rich languages.", "We look at the impact of recursive compositions on all combinations of ablated models, see Table 1 .", "We only look at the impact of using an LSTM cell rather than a recurrent cell since it was a better technique across the board (see previous section).", "Looking first at BiLSTMs, it seems that composition does not reliably help parsing accuracy, regardless of access to POS and character information.", "This indicates that the vectors obtained from the BiLSTM already contain information that would otherwise be obtained by using composition.", "Turning to results with either the forward or the backward LSTM ablated, we see the expected pattern.", "Composition helps more when the model lacks POS tags, indicating that there is some redundancy between these two methods of building contextual information.", "Composition helps recover a substantial part of the gap of the model with a backward LSTM with or without POS tag.", "It recovers a much less substantial part of the gap in other cases which means that, although there is some redundancy between these different methods of building contextual information, they are still complementary and a recursive composition function cannot fully compensate for the lack of a backward LSTM or POS and/or character information.", "There are some language idiosyncracies in the results.", "While composition helps recover most of the gap for the backward LSTM models without POS and/or character information for Czech and English, it does it to a much smaller extent for Basque and Finnish.", "We hypothesise that arc depth might impact the usefulness of composition, since more depth means more matrix multiplications with the composition function.", "However, we find no correlation between average arc depth of the treebanks and usefulness of composition.", "It is an open question why composition helps some languages more than others.", "Note that we are not the first to use composition over vectors obtained from a BiLSTM in the context of dependency parsing, as this was done by Qi and Manning (2017) .", "The difference is that they compose vectors 
before scoring transitions.", "It was also done by Kiperwasser and Goldberg (2016a) who showed that using BiLSTM vectors for words in their Tree LSTM parser is helpful but they did not compare this to using BiLSTM vectors without the Tree LSTM.", "Recurrent and recursive LSTMs in the way they have been considered in this paper are two ways of constructing contextual information and making it available for local decisions in a greedy parser.", "The strength of recursive LSTMs is that they can build this contextual information using hierarchical context rather than linear context.", "A possible weakness is that this makes the model sensitive to error propagation: a wrong attachment leads to using the wrong contextual information.", "It is therefore possible that the benefits and drawbacks of using this method cancel each other out in the context of BiLSTMs.", "Ensemble To investigate further the information captured by BiLSTMs, we ensemble the 6 versions of the models with POS and character information with the different feature extractors (bi, bw, f w) with (+lc) and without composition.", "We use the (unweighted) reparsing technique of Sagae and Lavie (2006) 6 and ignoring labels.", "As can be seen from the UAS scores in Table 2 , the ensemble (full) largely outperforms the parser using only a BiLSTM, indicating that the information obtained from the different models is complementary.", "To investigate the contribution of each of the 6 models, we ablate each one by one.", "As can be seen from Table 2 , ablating either of the BiLSTM models or the backward LSTM using composition, results in the least effective of the ablated models, further strengthening the conclusion that BiL-STMs are powerful feature extractors.", "Conclusion We investigated the impact of composing the representation of subtrees in a transition-based parser.", "We observed that composition does not reliably help a parser that uses a BiLSTM for feature extraction, indicating that vectors obtained from the BiLSTM might capture subtree information, which is consistent with the results of Linzen et al.", "(2016) .", "However, we observe that, when ablating the backward LSTM, performance drops and recursive composition does not help to recover much of this gap.", "We hypothesise that this is because the backward LSTM primarily improves the lookahead for the greedy parser.", "When ablating the forward LSTM, performance drops to a smaller extent and recursive composition recovers a substantial part of the gap.", "This indicates that a forward LSTM and a recursive composition function capture similar information, which we take to be related to the rich history-based features crucial for a transition-based parser.", "To capture this infor-mation, a recursive composition function is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM.", "We further find that recursive composition helps more when POS tags are ablated from the model, indicating that POS tags and a recursive composition function are partly redundant ways of constructing contextual information.", "Finally, we correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "K&G Transition-Based Parsing", "Composing Subtree Representations", "Composition in a K&G Parser", "Results", "Ablating the forward and backward LSTMs", "Ablating POS and character information", "Ensemble", "Conclusion" ] }
GEM-SciDuet-train-107#paper-1284#slide-8
Results BiLSTM ablations composition
[slide body not recoverable: OCR residue of a results figure for the BiLSTM ablations with composition]
[slide body not recoverable: OCR residue of a results figure for the BiLSTM ablations with composition]
[]
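The paper content quoted in the record above gives the two recursive composition functions verbatim: c = tanh(W[h; d; r] + b) for the ungated recurrent cell (+rc) and c = LSTM([h; d; r]) for the LSTM cell (+lc), where h and d are the head and dependent vectors and r encodes the label paired with the arc direction. The sketch below shows how such functions could be written; it is an illustrative sketch only, not the authors' UUParser code, and PyTorch, the class names and the dimension arguments are all assumptions made for illustration.

import torch
import torch.nn as nn

class RecurrentComposition(nn.Module):
    # Hypothetical module for illustration; not from the paper's codebase.
    # Ungated recurrent-cell composition (+rc): c = tanh(W[h; d; r] + b)
    def __init__(self, token_dim, rel_dim):
        super().__init__()
        self.affine = nn.Linear(2 * token_dim + rel_dim, token_dim)

    def forward(self, head, dep, rel):
        # head, dep: (1, token_dim) subtree/token vectors;
        # rel: (1, rel_dim) embedding of the label + arc direction
        return torch.tanh(self.affine(torch.cat([head, dep, rel], dim=-1)))

class LSTMComposition(nn.Module):
    # Hypothetical module for illustration; not from the paper's codebase.
    # LSTM-cell composition (+lc): c = LSTM([h; d; r]); a fresh state is
    # used when a head receives its first dependent (state=None).
    def __init__(self, token_dim, rel_dim):
        super().__init__()
        self.cell = nn.LSTMCell(2 * token_dim + rel_dim, token_dim)

    def forward(self, head, dep, rel, state=None):
        h, c = self.cell(torch.cat([head, dep, rel], dim=-1), state)
        return h, (h, c)  # h is the composed vector; (h, c) is the new subtree state

As the paper text describes, the composed vector returned for the head would then replace (or, in the K&G variant, be concatenated with) the head's token vector each time a dependent is attached.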
GEM-SciDuet-train-107#paper-1284#slide-9
1284
Recursive Subtree Composition in LSTM-Based Dependency Parsing
The need for tree structure modelling on top of sequence modelling is an open issue in neural dependency parsing. We investigate the impact of adding a tree layer on top of a sequential model by recursively composing subtree representations (composition) in a transition-based parser that uses features extracted by a BiLSTM. Composition seems superfluous with such a model, suggesting that BiLSTMs capture information about subtrees. We perform model ablations to tease out the conditions under which composition helps. When ablating the backward LSTM, performance drops and composition does not recover much of the gap. When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information. We take the backward LSTM to be related to lookahead features and the forward LSTM to the rich history-based features both crucial for transition-based parsers. To capture history-based information, composition is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM. We correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203 ], "paper_content_text": [ "Introduction Recursive neural networks allow us to construct vector representations of trees or subtrees.", "They have been used for constituency parsing by Socher et al.", "(2013) and Dyer et al.", "(2016) and for dependency parsing by Stenetorp (2013) and , among others.", "In particular, showed that composing representations of subtrees using recursive neural networks can be beneficial for transition-based dependency parsing.", "These results were further strengthened in Kuncoro et al.", "(2017) who showed, using ablation experiments, that composition is key in the Recurrent Neural Network Grammar (RNNG) generative parser by Dyer et al.", "(2016) .", "In a parallel development, Kiperwasser and Goldberg (2016b) showed that using BiLSTMs for feature extraction can lead to high parsing accuracy even with fairly simple parsing architectures, and using BiLSTMs for feature extraction has therefore become very popular in dependency parsing.", "It is used in the state-of-the-art parser of Dozat and Manning (2017) , was used in 8 of the 10 highest performing systems of the 2017 CoNLL shared task (Zeman et al., 2017) and 10 out of the 10 highest performing systems of the 2018 CoNLL shared task (Zeman et al., 2018) .", "This raises the question of whether features extracted with BiLSTMs in themselves capture information about subtrees, thus making recursive composition superfluous.", "Some support for this hypothesis comes from the results of Linzen et al.", "(2016) which indicate that LSTMs can capture hierarchical information: they can be trained to predict long-distance number agreement in English.", "Those results were extended to more constructions and three additional languages by Gulordava et al.", "(2018) .", "However, Kuncoro et al.", "(2018) have also shown that although sequential LSTMs can learn syntactic information, a recursive neural network which explicitly models hierarchy (the RNNG model from ) is better at this: it performs better on the number agreement task from Linzen et al.", "(2016) .", "To further explore this question in the context of dependency parsing, we investigate the use of recursive composition (henceforth referred to as composition) in a parser with an architecture like the one in Kiperwasser and Goldberg (2016b) .", "This allows us to explore variations of features and isolate the conditions under which composi-tion is helpful.", "We hypothesise that the use of a BiLSTM for feature extraction makes it possible to capture information about subtrees and therefore makes the use of subtree composition superfluous.", "We 
hypothesise that composition becomes useful when part of the BiLSTM is ablated, the forward or the backward LSTM.", "We further hypothesise that composition is most useful when the parser has no access to information about the function of words in the context of the sentence given by POS tags.", "When using POS tags, the tagger has indeed had access to the full sentence.", "We additionally look at what happens when we ablate character vectors which have been shown to capture information which is partially overlapping with information from POS tags.", "We experiment with a wider variety of languages than in order to explore whether the usefulness of different model variants vary depending on language type.", "K&G Transition-Based Parsing We define the parsing architecture introduced by Kiperwasser and Goldberg (2016b) at a high level of abstraction and henceforth refer to it as K&G.", "A K&G parser is a greedy transition-based parser.", "1 For an input sentence of length n with words w 1 , .", ".", ".", ", w n , a sequence of vectors x 1:n is created, where the vector x i is a vector representation of the word w i .", "We refer to these as type vectors, as they are the same for all occurrences of a word type.", "Type vectors are then passed through a feature function which learns representations of words in the context of the sentence.", "x i = e(w i ) v i = f (x 1:n , i) We refer to the vector v i as a token vector, as it is different for different tokens of the same word type.", "In Kiperwasser and Goldberg (2016b) , the feature function used is a BiLSTM.", "As is usual in transition-based parsing, parsing involves taking transitions from an initial configuration to a terminal one.", "Parser configurations are represented by a stack, a buffer and set of dependency arcs (Nivre, 2008) .", "For each configuration c, the feature extractor concatenates the token representations of core elements from the stack and buffer.", "These token vectors are passed to a classifier, typically a Multilayer Perceptron (MLP).", "The MLP scores transitions together with the arc labels for transitions that involve adding an arc.", "Both the word type vectors and the BiLSTMs are trained together with the model.", "looked at the impact of using a recursive composition function in their parser, which is also a transition-based parser but with an architecture different from K&G.", "They make use of a variant of the LSTM called a stack LSTM.", "A stack LSTM has push and pop operations which allow passing through states in a tree structure rather than sequentially.", "Stack LSTMs are used to represent the stack, the buffer, and the sequence of past parsing actions performed for a configuration.", "Composing Subtree Representations The words of the sentence are represented by vectors of the word types, together with a vector representing the word's POS tag.", "In the initial configuration, the vectors of all words are in the buffer and the stack is empty.", "The representation of the buffer is the end state of a backward LSTM over the word vectors.", "As parsing evolves, the word vectors are popped from the buffer, pushed to and popped from the stack and the representations of stack and buffer get updated.", "define a recursive composition function and compose tree representations incrementally, as dependents get attached to their head.", "The composed representation c is built by concatenating the vector h of the head with the vector of the dependent d, as well as a vector r representing the label paired with the 
direction of the arc.", "That concatenated vector is passed through an affine transformation and then through a tanh non-linear activation.", "c = tanh(W [h; d; r] + b) They create two versions of the parser.", "In the first version, when a dependent is attached to a head, the word vector of the head is replaced by a composed vector of the head and dependent.", "In the second version, they simply keep the vector of the head when attaching a dependent to a head.", "They observe that the version with composition is substantially better than the version without, by 1.3 LAS points for English (on the Penn Treebank (PTB) test set) and 2.1 for Chinese (on the Chinese Treebank (CTB) test set).", "Their parser uses POS tag information.", "POS tags help to disambiguate between different functional uses of a word and in this way give information about the use of the word in context.", "We hypothesise that the effect of using a recursive composition function is stronger when not making use of POS tags.", "Composition in a K&G Parser The parsing architectures of the stack LSTM parser (S-LSTM) and K&G are different but have some similarities.", "2 In both cases, the configuration is represented by vectors obtained by LSTMs.", "In K&G, it is represented by the token vectors of top items of the stack and the first item of the buffer.", "In the S-LSTM, it is represented by the vector representations of the entire stack, buffer and sequence of past transitions.", "Both types of parsers learn vector representations of word types which are passed to an LSTM.", "In K&G, they are passed to an LSTM in a feature extraction step that happens before parsing.", "The LSTM in this case is used to learn vectors that have information about the context of each word, a token vector.", "In the S-LSTM, word type vectors are passed to Stack LSTMs as parsing evolves.", "In this case, LSTMs are used to learn vector representations of the stack and buffer (as well as one which learns a representation of the parsing action history).", "When composition is not used in the S-LSTM, word vectors represent word types.", "When composition is used, as parsing evolves, the stack and buffer vectors get updated with information about the subtrees they contain, so that they gradually become contextualised.", "In this sense, those vectors become more like token vectors in K&G.", "More specifically, as explained in the previous section, when a dependent is attached to its head, the composition function is applied to the vectors of head and dependent and the vector of the head is replaced by this composed vector.", "We cannot apply composition on type vectors in the K&G architecture, since they are not used after the feature extraction step and hence cannot influence the representation of the configuration.", "Instead, we apply composition on the token vectors.", "We embed those composed representations in the same space as the token vectors.", "In K&G, like in the S-LSTM, we can create a composition function and compose the representation of subtrees as parsing evolves.", "We create two versions of the parser, one where word tokens are represented by their token vector.", "The other where they are represented by their token vector and the vector of their subtree c i , which is initially just a copy of the token vector (v i = f (x 1:n , i)β€’c i ).", "When a dependent word d is attached to a word h with a relation and direction r, c i is computed with the same composition function as in the S-LSTM defined in the previous section, repeated 
below.", "3 This composition function is a simple recurrent cell.", "Simple RNNs have known shortcomings which have been addressed by using LSTMs, as proposed by Hochreiter and Schmidhuber (1997) .", "A natural extension to this composition function is therefore to replace it with an LSTM cell.", "We also try this variant.", "We construct LSTMs for subtrees.", "We initialise a new LSTM for each new subtree that is formed, that is, when a dependent d is attached to a head h which does not have any dependent yet.", "Each time we attach a dependent to a head, we construct a vector which is a concatenation of h, d and r. We pass this vector to the LSTM of h. c is the output state of the LSTM after passing through that vector.", "We denote those models with +rc for the one using an ungated recurrent cell and with +lc for the one using an LSTM cell.", "c = tanh(W [h; d; r] + b) c = LSTM([h; d; r]) As results show (see Β§ 5), neither type of composition seems useful when used with the K&G parsing model, which indicates that BiLSTMs capture information about subtrees.", "To further investigate this and in order to isolate the conditions under which composition is helpful, we perform different model ablations and test the impact of recursive composition on these ablated models.", "First, we ablate parts of the BiLSTMs: we ablate either the forward or the backward LSTM.", "We therefore build parsers with 3 different feature functions f (x, i) over the word type vectors x i in the sentence x: a BiLSTM (bi) (our baseline), a backward LSTM (bw) (i.e., ablating the forward LSTM) and a forward LSTM (f w) (i.e., ablating the backward LSTM): bi(x, i) = BILSTM(x 1:n , i) bw(x, i) = LSTM(x n:1 , i) f w(x, i) = LSTM(x 1:n , i) K&G parsers with unidirectional LSTMs are, in some sense, more similar to the S-LSTM than those with a BiLSTM, since the S-LSTM only uses unidirectional LSTMs.", "We hypothesise that composition will help the parser using unidirectional LSTMs in the same way it helps an S-LSTM.", "We additionally experiment with the vector representing the word at the input of the LSTM.", "The most complex representation consists of a concatenation of an embedding of the word type e(w i ), an embedding of the (predicted) POS tag of w i , p(w i ) and a character representation of the word obtained by running a BiLSTM over the characters ch 1:m of w i (BiLSTM(ch 1:m )).", "x i = e(w i ) β€’ p(w i ) β€’ BiLSTM(ch 1:m ) Without a POS tag embedding, the word vector is a representation of the word type.", "With POS information, we have some information about the word in the context of the sentence and the tagger has had access to the full sentence.", "The representation of the word at the input of the BiLSTM is therefore more contextualised and it can be expected that a recursive composition function will be less helpful than when POS information is not used.", "Character information has been shown to be useful for dependency parsing first by .", "and Smith et al.", "(2018b) among others have shown that POS and character information are somewhat complementary.", "used similar character vectors in the S-LSTM parser but did not look at the impact of composition when using these vectors.", "Here, we experiment with ablating either or both of the character and POS vectors.", "We look at the impact of using composition on the full model as well as these ablated models.", "We hypothesise that composition is most helpful when those vectors are not used, since they give information about the functional use of the 
word in context.", "Parser We use UUParser, a variant of the K&G transition-based parser that employs the arc-hybrid transition system from Kuhlmann et al.", "(2011) extended with a SWAP transition and a Static-Dynamic oracle, as described in de Lhoneux et al.", "(2017b) 4 .", "The SWAP transition is used to allow the construction of non-projective dependency trees (Nivre, 2009) .", "We use default hyperparameters.", "When using POS tags, we use the universal POS tags from the UD treebanks which are coarsegrained and consistent across languages.", "Those POS tags are predicted by UDPipe (Straka et al., 2016) both for training and parsing.", "This parser obtained the 7th best LAS score on average in the 2018 CoNLL shared task (Zeman et al., 2018) , about 2.5 LAS points below the best system, which uses an ensemble system as well as ELMo embeddings, as introduced by Peters et al.", "(2018) .", "Note, however, that we use a slightly impoverished version of the model used for the shared task which is described in Smith et al.", "(2018a) : we use a less accurate POS tagger (UDPipe) and we do not make use of multi-treebank models.", "In addition, Smith et al.", "(2018a) use the three top items of the stack as well as the first item of the buffer to represent the configuration, while we only use the two top items of the stack and the first item of the buffer.", "Smith et al.", "(2018a) also use an extended feature set as introduced by Kiperwasser and Goldberg (2016b) where they also use the rightmost and leftmost children of the items of the stack and buffer that they consider.", "We do not use that extended feature set.", "This is to keep the parser settings as simple as possible and avoid adding confounding factors.", "It is still a near-SOTA model.", "We evaluate parsing models on the development sets and report the average of the 5 best results in 30 epochs and 5 runs with different random seeds.", "Data We test our models on a sample of treebanks from Universal Dependencies v2.1 (Nivre et al., 2017) .", "We follow the criteria from de Lhoneux et al.", "(2017c) to select our sample: we ensure typological variety, we ensure variety of domains, we verify the quality of the treebanks, and we use one treebank with a large amount of non-projective arcs.", "However, unlike them, we do not use extremely small treebanks.", "Our selection is the same as theirs but we remove the tiny treebanks and replace them with 3 others.", "Our final set is: Ancient Greek (PROIEL), Basque, Chinese, Czech, English, Finnish, French, Hebrew and Japanese.", "Results First, we look at the effect of our different recursive composition functions on the full model (i.e., the model using a BiLSTM for feature extraction as well as both character and POS tag information).", "As can be seen from Figure 1 , recursive composition using an LSTM cell (+lc) is generally better than recursive composition with a recurrent cell (+rc), but neither technique reliably improves the accuracy of a BiLSTM parser.", "Ablating the forward and backward LSTMs Second, we only consider the models using character and POS information and look at the effect of ablating parts of the BiLSTM on the different languages.", "The results can be seen in Figure 2 .", "As expected, the BiLSTM parser performs considerably better than both unidirectional LSTM parsers, and the backward LSTM is considerably better than the forward LSTM, on average.", "It is, however, interesting to note that using a forward LSTM is much more hurtful for some languages than others: 
it is especially hurtful for Chinese and Japanese.", "This can be explained by language properties: the right-headed languages suffer more from ablating the backward LSTM than other languages.", "We observe a correlation between how hurtful a forward model is compared to the baseline and the percentage of right-headed content dependency relations (R = βˆ’0.838, p < .01), see Figure 3 .", "5 There is no significant correlation between how hurtful ablating the forward LSTM is and the percentage of left-headed content dependency relations (p > .05) indicating that its usefulness is not dependent on language properties.", "We hypothesise that dependency length or sentence length can play a role but we also find no correlation between how hurtful it is to ablate the forward LSTM and average dependency or sentence length in treebanks.", "It is finally also interesting to note that the backward LSTM performance is close to the BiLSTMs performance for some languages (Japanese and French).", "5 The reason we only consider content dependency relations is that the UD scheme focuses on dependency relations between content words and treats function words as features of content words to maximise parallelism across languages (de Marneffe et al., 2014) .", "We now look at the effect of using recursive composition on these ablated models.", "Results are given in Figure 4 .", "First of all, we observe unsurprisingly that composition using an LSTM cell is much better than using a simple recurrent cell.", "Second, both types of composition help the backward LSTM case, but neither reliably helps the bi models.", "Finally, the recurrent cell does not help the forward LSTM case but the LSTM cell does to some extent.", "It is interesting to note that using composition, especially using an LSTM cell, bridges a substantial part of the gap between the bw and the bi models.", "These results can be related to the literature on transition-based dependency parsing.", "Transitionbased parsers generally rely on two types of features: history-based features over the emerging dependency tree and lookahead features over the buffer of remaining input.", "The former are based on a hierarchical structure, the latter are purely sequential.", "McDonald and Nivre (2007) and Mc-Donald and Nivre (2011) have shown that historybased features enhance transition-based parsers as long as they do not suffer from error propagation.", "However, Nivre (2006) has also shown that lookahead features are absolutely crucial given the greedy left-to-right parsing strategy.", "In the model architectures considered here, the backward LSTM provides an improved lookahead.", "Similarly to the lookahead in statistical parsing, it is sequential.", "The difference is that it gives information about upcoming words with unbounded length.", "The forward LSTM in this model architecture provides history-based information but unlike in statistical parsing, that information is built sequentially rather than hierarchically: the forward LSTM passes through the sentence in the linear order of the sentence.", "In our results, we see that lookahead features are more important than the history-based ones.", "It hurts parsing accuracy more to ablate the backward LSTM than to ablate the forward one.", "This is expected given that some history-based information is still available through the top tokens on the stack, while the lookahead information is almost lost completely without the backward LSTM.", "A composition function gives hierarchical information about the 
history of parsing actions.", "It makes sense that it helps the backward LSTM model most since that model has no access to any information about parsing history.", "It helps the forward LSTM slightly which indicates that there can be gains from using structured information about parsing history rather than sequential information.", "We could then expect that composition should help the BiLSTM model which, how- Figure 5 : LAS of baseline, using char and/or POS tags to construct word representations ever, is not the case.", "This might be because the BiLSTM constructs information about parsing history and lookahead into a unique representation.", "In any case, this indicates that BiLSTMs are powerful feature extractors which seem to capture useful information about subtrees.", "Ablating POS and character information Next, we look at the effect of the different word representation methods on the different languages, as represented in Figure 5 .", "As is consistent with the literature de Lhoneux et al., 2017a; Smith et al., 2018b) , using characterbased word representations and/or POS tags consistently improves parsing accuracy but has a different impact in different languages and the benefits of both methods are not cumulative: using the two combined is not much better than using either on its own.", "In particular, character models are an efficient way to obtain large improvements in morphologically rich languages.", "We look at the impact of recursive compositions on all combinations of ablated models, see Table 1 .", "We only look at the impact of using an LSTM cell rather than a recurrent cell since it was a better technique across the board (see previous section).", "Looking first at BiLSTMs, it seems that composition does not reliably help parsing accuracy, regardless of access to POS and character information.", "This indicates that the vectors obtained from the BiLSTM already contain information that would otherwise be obtained by using composition.", "Turning to results with either the forward or the backward LSTM ablated, we see the expected pattern.", "Composition helps more when the model lacks POS tags, indicating that there is some redundancy between these two methods of building contextual information.", "Composition helps recover a substantial part of the gap of the model with a backward LSTM with or without POS tag.", "It recovers a much less substantial part of the gap in other cases which means that, although there is some redundancy between these different methods of building contextual information, they are still complementary and a recursive composition function cannot fully compensate for the lack of a backward LSTM or POS and/or character information.", "There are some language idiosyncracies in the results.", "While composition helps recover most of the gap for the backward LSTM models without POS and/or character information for Czech and English, it does it to a much smaller extent for Basque and Finnish.", "We hypothesise that arc depth might impact the usefulness of composition, since more depth means more matrix multiplications with the composition function.", "However, we find no correlation between average arc depth of the treebanks and usefulness of composition.", "It is an open question why composition helps some languages more than others.", "Note that we are not the first to use composition over vectors obtained from a BiLSTM in the context of dependency parsing, as this was done by Qi and Manning (2017) .", "The difference is that they compose vectors 
before scoring transitions.", "It was also done by Kiperwasser and Goldberg (2016a) who showed that using BiLSTM vectors for words in their Tree LSTM parser is helpful but they did not compare this to using BiLSTM vectors without the Tree LSTM.", "Recurrent and recursive LSTMs in the way they have been considered in this paper are two ways of constructing contextual information and making it available for local decisions in a greedy parser.", "The strength of recursive LSTMs is that they can build this contextual information using hierarchical context rather than linear context.", "A possible weakness is that this makes the model sensitive to error propagation: a wrong attachment leads to using the wrong contextual information.", "It is therefore possible that the benefits and drawbacks of using this method cancel each other out in the context of BiLSTMs.", "Ensemble To investigate further the information captured by BiLSTMs, we ensemble the 6 versions of the models with POS and character information with the different feature extractors (bi, bw, f w) with (+lc) and without composition.", "We use the (unweighted) reparsing technique of Sagae and Lavie (2006) 6 and ignoring labels.", "As can be seen from the UAS scores in Table 2 , the ensemble (full) largely outperforms the parser using only a BiLSTM, indicating that the information obtained from the different models is complementary.", "To investigate the contribution of each of the 6 models, we ablate each one by one.", "As can be seen from Table 2 , ablating either of the BiLSTM models or the backward LSTM using composition, results in the least effective of the ablated models, further strengthening the conclusion that BiL-STMs are powerful feature extractors.", "Conclusion We investigated the impact of composing the representation of subtrees in a transition-based parser.", "We observed that composition does not reliably help a parser that uses a BiLSTM for feature extraction, indicating that vectors obtained from the BiLSTM might capture subtree information, which is consistent with the results of Linzen et al.", "(2016) .", "However, we observe that, when ablating the backward LSTM, performance drops and recursive composition does not help to recover much of this gap.", "We hypothesise that this is because the backward LSTM primarily improves the lookahead for the greedy parser.", "When ablating the forward LSTM, performance drops to a smaller extent and recursive composition recovers a substantial part of the gap.", "This indicates that a forward LSTM and a recursive composition function capture similar information, which we take to be related to the rich history-based features crucial for a transition-based parser.", "To capture this infor-mation, a recursive composition function is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM.", "We further find that recursive composition helps more when POS tags are ablated from the model, indicating that POS tags and a recursive composition function are partly redundant ways of constructing contextual information.", "Finally, we correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages." ] }
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "K&G Transition-Based Parsing", "Composing Subtree Representations", "Composition in a K&G Parser", "Results", "Ablating the forward and backward LSTMs", "Ablating POS and character information", "Ensemble", "Conclusion" ] }
GEM-SciDuet-train-107#paper-1284#slide-9
Word representation
+pos [remaining slide body not recoverable from OCR]
+pos [remaining slide body not recoverable from OCR]
[]
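The slide record above concerns the word representation used at the input of the feature extractor, which the paper content defines as the concatenation x_i = e(w_i) ∘ p(w_i) ∘ BiLSTM(ch_1:m), i.e. a word-type embedding, a (predicted) POS-tag embedding and the final states of a character BiLSTM. The snippet below is a minimal illustrative sketch of that concatenation under assumed dimensions; PyTorch, the class name and all parameters are assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class WordRepresentation(nn.Module):
    # Hypothetical module for illustration; not from the paper's codebase.
    # x_i = e(w_i) . p(w_i) . BiLSTM(ch_1:m), built by concatenation.
    def __init__(self, n_words, n_tags, n_chars, w_dim, p_dim, c_dim):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, w_dim)
        self.tag_emb = nn.Embedding(n_tags, p_dim)
        self.char_emb = nn.Embedding(n_chars, c_dim)
        self.char_lstm = nn.LSTM(c_dim, c_dim, bidirectional=True, batch_first=True)

    def forward(self, word_id, tag_id, char_ids):
        # word_id, tag_id: 0-dim LongTensors; char_ids: LongTensor of shape (m,)
        chars = self.char_emb(char_ids).unsqueeze(0)        # (1, m, c_dim)
        _, (h_n, _) = self.char_lstm(chars)                 # h_n: (2, 1, c_dim)
        char_vec = torch.cat([h_n[0, 0], h_n[1, 0]], dim=-1)  # fw/bw final states, (2*c_dim,)
        return torch.cat([self.word_emb(word_id),
                          self.tag_emb(tag_id),
                          char_vec], dim=-1)

Ablating POS or character information, as in the paper's experiments, would simply correspond to dropping the corresponding component from the final concatenation.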
GEM-SciDuet-train-107#paper-1284#slide-10
1284
Recursive Subtree Composition in LSTM-Based Dependency Parsing
The need for tree structure modelling on top of sequence modelling is an open issue in neural dependency parsing. We investigate the impact of adding a tree layer on top of a sequential model by recursively composing subtree representations (composition) in a transition-based parser that uses features extracted by a BiLSTM. Composition seems superfluous with such a model, suggesting that BiLSTMs capture information about subtrees. We perform model ablations to tease out the conditions under which composition helps. When ablating the backward LSTM, performance drops and composition does not recover much of the gap. When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information. We take the backward LSTM to be related to lookahead features and the forward LSTM to the rich history-based features both crucial for transition-based parsers. To capture history-based information, composition is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM. We correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203 ], "paper_content_text": [ "Introduction Recursive neural networks allow us to construct vector representations of trees or subtrees.", "They have been used for constituency parsing by Socher et al.", "(2013) and Dyer et al.", "(2016) and for dependency parsing by Stenetorp (2013) and , among others.", "In particular, showed that composing representations of subtrees using recursive neural networks can be beneficial for transition-based dependency parsing.", "These results were further strengthened in Kuncoro et al.", "(2017) who showed, using ablation experiments, that composition is key in the Recurrent Neural Network Grammar (RNNG) generative parser by Dyer et al.", "(2016) .", "In a parallel development, Kiperwasser and Goldberg (2016b) showed that using BiLSTMs for feature extraction can lead to high parsing accuracy even with fairly simple parsing architectures, and using BiLSTMs for feature extraction has therefore become very popular in dependency parsing.", "It is used in the state-of-the-art parser of Dozat and Manning (2017) , was used in 8 of the 10 highest performing systems of the 2017 CoNLL shared task (Zeman et al., 2017) and 10 out of the 10 highest performing systems of the 2018 CoNLL shared task (Zeman et al., 2018) .", "This raises the question of whether features extracted with BiLSTMs in themselves capture information about subtrees, thus making recursive composition superfluous.", "Some support for this hypothesis comes from the results of Linzen et al.", "(2016) which indicate that LSTMs can capture hierarchical information: they can be trained to predict long-distance number agreement in English.", "Those results were extended to more constructions and three additional languages by Gulordava et al.", "(2018) .", "However, Kuncoro et al.", "(2018) have also shown that although sequential LSTMs can learn syntactic information, a recursive neural network which explicitly models hierarchy (the RNNG model from ) is better at this: it performs better on the number agreement task from Linzen et al.", "(2016) .", "To further explore this question in the context of dependency parsing, we investigate the use of recursive composition (henceforth referred to as composition) in a parser with an architecture like the one in Kiperwasser and Goldberg (2016b) .", "This allows us to explore variations of features and isolate the conditions under which composi-tion is helpful.", "We hypothesise that the use of a BiLSTM for feature extraction makes it possible to capture information about subtrees and therefore makes the use of subtree composition superfluous.", "We 
hypothesise that composition becomes useful when part of the BiLSTM is ablated, the forward or the backward LSTM.", "We further hypothesise that composition is most useful when the parser has no access to information about the function of words in the context of the sentence given by POS tags.", "When using POS tags, the tagger has indeed had access to the full sentence.", "We additionally look at what happens when we ablate character vectors which have been shown to capture information which is partially overlapping with information from POS tags.", "We experiment with a wider variety of languages than in order to explore whether the usefulness of different model variants vary depending on language type.", "K&G Transition-Based Parsing We define the parsing architecture introduced by Kiperwasser and Goldberg (2016b) at a high level of abstraction and henceforth refer to it as K&G.", "A K&G parser is a greedy transition-based parser.", "1 For an input sentence of length n with words w 1 , .", ".", ".", ", w n , a sequence of vectors x 1:n is created, where the vector x i is a vector representation of the word w i .", "We refer to these as type vectors, as they are the same for all occurrences of a word type.", "Type vectors are then passed through a feature function which learns representations of words in the context of the sentence.", "x i = e(w i ) v i = f (x 1:n , i) We refer to the vector v i as a token vector, as it is different for different tokens of the same word type.", "In Kiperwasser and Goldberg (2016b) , the feature function used is a BiLSTM.", "As is usual in transition-based parsing, parsing involves taking transitions from an initial configuration to a terminal one.", "Parser configurations are represented by a stack, a buffer and set of dependency arcs (Nivre, 2008) .", "For each configuration c, the feature extractor concatenates the token representations of core elements from the stack and buffer.", "These token vectors are passed to a classifier, typically a Multilayer Perceptron (MLP).", "The MLP scores transitions together with the arc labels for transitions that involve adding an arc.", "Both the word type vectors and the BiLSTMs are trained together with the model.", "looked at the impact of using a recursive composition function in their parser, which is also a transition-based parser but with an architecture different from K&G.", "They make use of a variant of the LSTM called a stack LSTM.", "A stack LSTM has push and pop operations which allow passing through states in a tree structure rather than sequentially.", "Stack LSTMs are used to represent the stack, the buffer, and the sequence of past parsing actions performed for a configuration.", "Composing Subtree Representations The words of the sentence are represented by vectors of the word types, together with a vector representing the word's POS tag.", "In the initial configuration, the vectors of all words are in the buffer and the stack is empty.", "The representation of the buffer is the end state of a backward LSTM over the word vectors.", "As parsing evolves, the word vectors are popped from the buffer, pushed to and popped from the stack and the representations of stack and buffer get updated.", "define a recursive composition function and compose tree representations incrementally, as dependents get attached to their head.", "The composed representation c is built by concatenating the vector h of the head with the vector of the dependent d, as well as a vector r representing the label paired with the 
direction of the arc.", "That concatenated vector is passed through an affine transformation and then through a tanh non-linear activation.", "c = tanh(W [h; d; r] + b) They create two versions of the parser.", "In the first version, when a dependent is attached to a head, the word vector of the head is replaced by a composed vector of the head and dependent.", "In the second version, they simply keep the vector of the head when attaching a dependent to a head.", "They observe that the version with composition is substantially better than the version without, by 1.3 LAS points for English (on the Penn Treebank (PTB) test set) and 2.1 for Chinese (on the Chinese Treebank (CTB) test set).", "Their parser uses POS tag information.", "POS tags help to disambiguate between different functional uses of a word and in this way give information about the use of the word in context.", "We hypothesise that the effect of using a recursive composition function is stronger when not making use of POS tags.", "Composition in a K&G Parser The parsing architectures of the stack LSTM parser (S-LSTM) and K&G are different but have some similarities.", "2 In both cases, the configuration is represented by vectors obtained by LSTMs.", "In K&G, it is represented by the token vectors of top items of the stack and the first item of the buffer.", "In the S-LSTM, it is represented by the vector representations of the entire stack, buffer and sequence of past transitions.", "Both types of parsers learn vector representations of word types which are passed to an LSTM.", "In K&G, they are passed to an LSTM in a feature extraction step that happens before parsing.", "The LSTM in this case is used to learn vectors that have information about the context of each word, a token vector.", "In the S-LSTM, word type vectors are passed to Stack LSTMs as parsing evolves.", "In this case, LSTMs are used to learn vector representations of the stack and buffer (as well as one which learns a representation of the parsing action history).", "When composition is not used in the S-LSTM, word vectors represent word types.", "When composition is used, as parsing evolves, the stack and buffer vectors get updated with information about the subtrees they contain, so that they gradually become contextualised.", "In this sense, those vectors become more like token vectors in K&G.", "More specifically, as explained in the previous section, when a dependent is attached to its head, the composition function is applied to the vectors of head and dependent and the vector of the head is replaced by this composed vector.", "We cannot apply composition on type vectors in the K&G architecture, since they are not used after the feature extraction step and hence cannot influence the representation of the configuration.", "Instead, we apply composition on the token vectors.", "We embed those composed representations in the same space as the token vectors.", "In K&G, like in the S-LSTM, we can create a composition function and compose the representation of subtrees as parsing evolves.", "We create two versions of the parser, one where word tokens are represented by their token vector.", "The other where they are represented by their token vector and the vector of their subtree c i , which is initially just a copy of the token vector (v i = f (x 1:n , i)β€’c i ).", "When a dependent word d is attached to a word h with a relation and direction r, c i is computed with the same composition function as in the S-LSTM defined in the previous section, repeated 
below.", "3 This composition function is a simple recurrent cell.", "Simple RNNs have known shortcomings which have been addressed by using LSTMs, as proposed by Hochreiter and Schmidhuber (1997) .", "A natural extension to this composition function is therefore to replace it with an LSTM cell.", "We also try this variant.", "We construct LSTMs for subtrees.", "We initialise a new LSTM for each new subtree that is formed, that is, when a dependent d is attached to a head h which does not have any dependent yet.", "Each time we attach a dependent to a head, we construct a vector which is a concatenation of h, d and r. We pass this vector to the LSTM of h. c is the output state of the LSTM after passing through that vector.", "We denote those models with +rc for the one using an ungated recurrent cell and with +lc for the one using an LSTM cell.", "c = tanh(W [h; d; r] + b) c = LSTM([h; d; r]) As results show (see Β§ 5), neither type of composition seems useful when used with the K&G parsing model, which indicates that BiLSTMs capture information about subtrees.", "To further investigate this and in order to isolate the conditions under which composition is helpful, we perform different model ablations and test the impact of recursive composition on these ablated models.", "First, we ablate parts of the BiLSTMs: we ablate either the forward or the backward LSTM.", "We therefore build parsers with 3 different feature functions f (x, i) over the word type vectors x i in the sentence x: a BiLSTM (bi) (our baseline), a backward LSTM (bw) (i.e., ablating the forward LSTM) and a forward LSTM (f w) (i.e., ablating the backward LSTM): bi(x, i) = BILSTM(x 1:n , i) bw(x, i) = LSTM(x n:1 , i) f w(x, i) = LSTM(x 1:n , i) K&G parsers with unidirectional LSTMs are, in some sense, more similar to the S-LSTM than those with a BiLSTM, since the S-LSTM only uses unidirectional LSTMs.", "We hypothesise that composition will help the parser using unidirectional LSTMs in the same way it helps an S-LSTM.", "We additionally experiment with the vector representing the word at the input of the LSTM.", "The most complex representation consists of a concatenation of an embedding of the word type e(w i ), an embedding of the (predicted) POS tag of w i , p(w i ) and a character representation of the word obtained by running a BiLSTM over the characters ch 1:m of w i (BiLSTM(ch 1:m )).", "x i = e(w i ) β€’ p(w i ) β€’ BiLSTM(ch 1:m ) Without a POS tag embedding, the word vector is a representation of the word type.", "With POS information, we have some information about the word in the context of the sentence and the tagger has had access to the full sentence.", "The representation of the word at the input of the BiLSTM is therefore more contextualised and it can be expected that a recursive composition function will be less helpful than when POS information is not used.", "Character information has been shown to be useful for dependency parsing first by .", "and Smith et al.", "(2018b) among others have shown that POS and character information are somewhat complementary.", "used similar character vectors in the S-LSTM parser but did not look at the impact of composition when using these vectors.", "Here, we experiment with ablating either or both of the character and POS vectors.", "We look at the impact of using composition on the full model as well as these ablated models.", "We hypothesise that composition is most helpful when those vectors are not used, since they give information about the functional use of the 
word in context.", "Parser We use UUParser, a variant of the K&G transition-based parser that employs the arc-hybrid transition system from Kuhlmann et al.", "(2011) extended with a SWAP transition and a Static-Dynamic oracle, as described in de Lhoneux et al.", "(2017b) 4 .", "The SWAP transition is used to allow the construction of non-projective dependency trees (Nivre, 2009) .", "We use default hyperparameters.", "When using POS tags, we use the universal POS tags from the UD treebanks which are coarsegrained and consistent across languages.", "Those POS tags are predicted by UDPipe (Straka et al., 2016) both for training and parsing.", "This parser obtained the 7th best LAS score on average in the 2018 CoNLL shared task (Zeman et al., 2018) , about 2.5 LAS points below the best system, which uses an ensemble system as well as ELMo embeddings, as introduced by Peters et al.", "(2018) .", "Note, however, that we use a slightly impoverished version of the model used for the shared task which is described in Smith et al.", "(2018a) : we use a less accurate POS tagger (UDPipe) and we do not make use of multi-treebank models.", "In addition, Smith et al.", "(2018a) use the three top items of the stack as well as the first item of the buffer to represent the configuration, while we only use the two top items of the stack and the first item of the buffer.", "Smith et al.", "(2018a) also use an extended feature set as introduced by Kiperwasser and Goldberg (2016b) where they also use the rightmost and leftmost children of the items of the stack and buffer that they consider.", "We do not use that extended feature set.", "This is to keep the parser settings as simple as possible and avoid adding confounding factors.", "It is still a near-SOTA model.", "We evaluate parsing models on the development sets and report the average of the 5 best results in 30 epochs and 5 runs with different random seeds.", "Data We test our models on a sample of treebanks from Universal Dependencies v2.1 (Nivre et al., 2017) .", "We follow the criteria from de Lhoneux et al.", "(2017c) to select our sample: we ensure typological variety, we ensure variety of domains, we verify the quality of the treebanks, and we use one treebank with a large amount of non-projective arcs.", "However, unlike them, we do not use extremely small treebanks.", "Our selection is the same as theirs but we remove the tiny treebanks and replace them with 3 others.", "Our final set is: Ancient Greek (PROIEL), Basque, Chinese, Czech, English, Finnish, French, Hebrew and Japanese.", "Results First, we look at the effect of our different recursive composition functions on the full model (i.e., the model using a BiLSTM for feature extraction as well as both character and POS tag information).", "As can be seen from Figure 1 , recursive composition using an LSTM cell (+lc) is generally better than recursive composition with a recurrent cell (+rc), but neither technique reliably improves the accuracy of a BiLSTM parser.", "Ablating the forward and backward LSTMs Second, we only consider the models using character and POS information and look at the effect of ablating parts of the BiLSTM on the different languages.", "The results can be seen in Figure 2 .", "As expected, the BiLSTM parser performs considerably better than both unidirectional LSTM parsers, and the backward LSTM is considerably better than the forward LSTM, on average.", "It is, however, interesting to note that using a forward LSTM is much more hurtful for some languages than others: 
it is especially hurtful for Chinese and Japanese.", "This can be explained by language properties: the right-headed languages suffer more from ablating the backward LSTM than other languages.", "We observe a correlation between how hurtful a forward model is compared to the baseline and the percentage of right-headed content dependency relations (R = βˆ’0.838, p < .01), see Figure 3 .", "5 There is no significant correlation between how hurtful ablating the forward LSTM is and the percentage of left-headed content dependency relations (p > .05) indicating that its usefulness is not dependent on language properties.", "We hypothesise that dependency length or sentence length can play a role but we also find no correlation between how hurtful it is to ablate the forward LSTM and average dependency or sentence length in treebanks.", "It is finally also interesting to note that the backward LSTM performance is close to the BiLSTMs performance for some languages (Japanese and French).", "5 The reason we only consider content dependency relations is that the UD scheme focuses on dependency relations between content words and treats function words as features of content words to maximise parallelism across languages (de Marneffe et al., 2014) .", "We now look at the effect of using recursive composition on these ablated models.", "Results are given in Figure 4 .", "First of all, we observe unsurprisingly that composition using an LSTM cell is much better than using a simple recurrent cell.", "Second, both types of composition help the backward LSTM case, but neither reliably helps the bi models.", "Finally, the recurrent cell does not help the forward LSTM case but the LSTM cell does to some extent.", "It is interesting to note that using composition, especially using an LSTM cell, bridges a substantial part of the gap between the bw and the bi models.", "These results can be related to the literature on transition-based dependency parsing.", "Transitionbased parsers generally rely on two types of features: history-based features over the emerging dependency tree and lookahead features over the buffer of remaining input.", "The former are based on a hierarchical structure, the latter are purely sequential.", "McDonald and Nivre (2007) and Mc-Donald and Nivre (2011) have shown that historybased features enhance transition-based parsers as long as they do not suffer from error propagation.", "However, Nivre (2006) has also shown that lookahead features are absolutely crucial given the greedy left-to-right parsing strategy.", "In the model architectures considered here, the backward LSTM provides an improved lookahead.", "Similarly to the lookahead in statistical parsing, it is sequential.", "The difference is that it gives information about upcoming words with unbounded length.", "The forward LSTM in this model architecture provides history-based information but unlike in statistical parsing, that information is built sequentially rather than hierarchically: the forward LSTM passes through the sentence in the linear order of the sentence.", "In our results, we see that lookahead features are more important than the history-based ones.", "It hurts parsing accuracy more to ablate the backward LSTM than to ablate the forward one.", "This is expected given that some history-based information is still available through the top tokens on the stack, while the lookahead information is almost lost completely without the backward LSTM.", "A composition function gives hierarchical information about the 
history of parsing actions.", "It makes sense that it helps the backward LSTM model most since that model has no access to any information about parsing history.", "It helps the forward LSTM slightly which indicates that there can be gains from using structured information about parsing history rather than sequential information.", "We could then expect that composition should help the BiLSTM model which, how- Figure 5 : LAS of baseline, using char and/or POS tags to construct word representations ever, is not the case.", "This might be because the BiLSTM constructs information about parsing history and lookahead into a unique representation.", "In any case, this indicates that BiLSTMs are powerful feature extractors which seem to capture useful information about subtrees.", "Ablating POS and character information Next, we look at the effect of the different word representation methods on the different languages, as represented in Figure 5 .", "As is consistent with the literature de Lhoneux et al., 2017a; Smith et al., 2018b) , using characterbased word representations and/or POS tags consistently improves parsing accuracy but has a different impact in different languages and the benefits of both methods are not cumulative: using the two combined is not much better than using either on its own.", "In particular, character models are an efficient way to obtain large improvements in morphologically rich languages.", "We look at the impact of recursive compositions on all combinations of ablated models, see Table 1 .", "We only look at the impact of using an LSTM cell rather than a recurrent cell since it was a better technique across the board (see previous section).", "Looking first at BiLSTMs, it seems that composition does not reliably help parsing accuracy, regardless of access to POS and character information.", "This indicates that the vectors obtained from the BiLSTM already contain information that would otherwise be obtained by using composition.", "Turning to results with either the forward or the backward LSTM ablated, we see the expected pattern.", "Composition helps more when the model lacks POS tags, indicating that there is some redundancy between these two methods of building contextual information.", "Composition helps recover a substantial part of the gap of the model with a backward LSTM with or without POS tag.", "It recovers a much less substantial part of the gap in other cases which means that, although there is some redundancy between these different methods of building contextual information, they are still complementary and a recursive composition function cannot fully compensate for the lack of a backward LSTM or POS and/or character information.", "There are some language idiosyncracies in the results.", "While composition helps recover most of the gap for the backward LSTM models without POS and/or character information for Czech and English, it does it to a much smaller extent for Basque and Finnish.", "We hypothesise that arc depth might impact the usefulness of composition, since more depth means more matrix multiplications with the composition function.", "However, we find no correlation between average arc depth of the treebanks and usefulness of composition.", "It is an open question why composition helps some languages more than others.", "Note that we are not the first to use composition over vectors obtained from a BiLSTM in the context of dependency parsing, as this was done by Qi and Manning (2017) .", "The difference is that they compose vectors 
before scoring transitions.", "It was also done by Kiperwasser and Goldberg (2016a) who showed that using BiLSTM vectors for words in their Tree LSTM parser is helpful but they did not compare this to using BiLSTM vectors without the Tree LSTM.", "Recurrent and recursive LSTMs in the way they have been considered in this paper are two ways of constructing contextual information and making it available for local decisions in a greedy parser.", "The strength of recursive LSTMs is that they can build this contextual information using hierarchical context rather than linear context.", "A possible weakness is that this makes the model sensitive to error propagation: a wrong attachment leads to using the wrong contextual information.", "It is therefore possible that the benefits and drawbacks of using this method cancel each other out in the context of BiLSTMs.", "Ensemble To investigate further the information captured by BiLSTMs, we ensemble the 6 versions of the models with POS and character information with the different feature extractors (bi, bw, f w) with (+lc) and without composition.", "We use the (unweighted) reparsing technique of Sagae and Lavie (2006) 6 and ignoring labels.", "As can be seen from the UAS scores in Table 2 , the ensemble (full) largely outperforms the parser using only a BiLSTM, indicating that the information obtained from the different models is complementary.", "To investigate the contribution of each of the 6 models, we ablate each one by one.", "As can be seen from Table 2 , ablating either of the BiLSTM models or the backward LSTM using composition, results in the least effective of the ablated models, further strengthening the conclusion that BiL-STMs are powerful feature extractors.", "Conclusion We investigated the impact of composing the representation of subtrees in a transition-based parser.", "We observed that composition does not reliably help a parser that uses a BiLSTM for feature extraction, indicating that vectors obtained from the BiLSTM might capture subtree information, which is consistent with the results of Linzen et al.", "(2016) .", "However, we observe that, when ablating the backward LSTM, performance drops and recursive composition does not help to recover much of this gap.", "We hypothesise that this is because the backward LSTM primarily improves the lookahead for the greedy parser.", "When ablating the forward LSTM, performance drops to a smaller extent and recursive composition recovers a substantial part of the gap.", "This indicates that a forward LSTM and a recursive composition function capture similar information, which we take to be related to the rich history-based features crucial for a transition-based parser.", "To capture this infor-mation, a recursive composition function is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM.", "We further find that recursive composition helps more when POS tags are ablated from the model, indicating that POS tags and a recursive composition function are partly redundant ways of constructing contextual information.", "Finally, we correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages." ] }
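The paper content above defines two subtree-composition functions: the ungated cell c = tanh(W[h; d; r] + b) and an LSTM-cell variant that keeps a per-head state updated each time a dependent is attached. Below is a minimal sketch of both, assuming PyTorch; it is not the authors' implementation, and all dimensions and names are illustrative.

```python
# Minimal sketch (not the authors' code) of the two composition functions
# described above, assuming PyTorch. Dimensions and names are illustrative.
import torch
import torch.nn as nn


class RecurrentCellComposition(nn.Module):
    """c = tanh(W [h; d; r] + b) -- the simple (ungated) composition."""

    def __init__(self, token_dim: int, rel_dim: int):
        super().__init__()
        self.linear = nn.Linear(2 * token_dim + rel_dim, token_dim)

    def forward(self, head, dep, rel):
        return torch.tanh(self.linear(torch.cat([head, dep, rel], dim=-1)))


class LSTMCellComposition(nn.Module):
    """Gated variant: each head keeps an LSTM state that is fed [h; d; r]
    every time a dependent is attached; c is the resulting output state."""

    def __init__(self, token_dim: int, rel_dim: int):
        super().__init__()
        self.cell = nn.LSTMCell(2 * token_dim + rel_dim, token_dim)

    def init_state(self, token_dim: int):
        # Fresh state for a new subtree (a head that has no dependents yet).
        return (torch.zeros(1, token_dim), torch.zeros(1, token_dim))

    def forward(self, head, dep, rel, state):
        x = torch.cat([head, dep, rel], dim=-1).unsqueeze(0)
        h, c = self.cell(x, state)
        return h.squeeze(0), (h, c)


if __name__ == "__main__":
    token_dim, rel_dim = 8, 4
    head = torch.randn(token_dim)
    dep = torch.randn(token_dim)
    rel = torch.randn(rel_dim)

    rc = RecurrentCellComposition(token_dim, rel_dim)
    print(rc(head, dep, rel).shape)      # torch.Size([8])

    lc = LSTMCellComposition(token_dim, rel_dim)
    state = lc.init_state(token_dim)
    composed, state = lc(head, dep, rel, state)
    print(composed.shape)                # torch.Size([8])
```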
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "K&G Transition-Based Parsing", "Composing Subtree Representations", "Composition in a K&G Parser", "Results", "Ablating the forward and backward LSTMs", "Ablating POS and character information", "Ensemble", "Conclusion" ] }
GEM-SciDuet-train-107#paper-1284#slide-10
Composition gap recovery
pos+char+ pos+char- pos-char+ pos-char-
pos+char+ pos+char- pos-char+ pos-char-
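The slide above reports how much of the gap between an ablated model and the full BiLSTM baseline is recovered by adding composition, under the four POS/character conditions listed. The paper text only describes this qualitatively, so the formula below is an assumption (recovered fraction of the bi-minus-ablated LAS gap), and the example scores are illustrative, not reported numbers.

```python
# One plausible "gap recovery" computation, assuming it is the fraction of
# the (BiLSTM - ablated) LAS gap closed by adding composition. The formula
# and the example scores are illustrative, not taken from the paper.
def gap_recovery(las_bi: float, las_ablated: float, las_ablated_plus_comp: float) -> float:
    gap = las_bi - las_ablated
    if gap <= 0:
        return float("nan")  # no gap to recover
    return (las_ablated_plus_comp - las_ablated) / gap


if __name__ == "__main__":
    # e.g. bw model 2.0 LAS below bi; composition recovers 1.4 of those points
    print(f"{gap_recovery(85.0, 83.0, 84.4):.0%}")  # 70%
```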
[]
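The row above describes the word representation as the concatenation of a word embedding, a POS embedding and a character BiLSTM state, fed to a sentence-level BiLSTM (or a single forward/backward LSTM in the ablations) to produce one token vector per word. The following is a minimal PyTorch sketch of that feature extractor; it is not UUParser's actual (DyNet-based) code, and all sizes and names are illustrative.

```python
# Minimal PyTorch sketch (not UUParser's implementation) of the feature
# extraction described above: x_i = concat(word emb, POS emb, char-BiLSTM),
# then a sentence BiLSTM (or a single-direction LSTM for the bw/fw ablations)
# producing one token vector per word. Sizes are illustrative.
import torch
import torch.nn as nn


class FeatureExtractor(nn.Module):
    def __init__(self, n_words, n_pos, n_chars,
                 w_dim=100, p_dim=20, c_dim=50, hidden=125, mode="bi"):
        super().__init__()
        assert mode in {"bi", "fw", "bw"}
        self.mode = mode
        self.word_emb = nn.Embedding(n_words, w_dim)
        self.pos_emb = nn.Embedding(n_pos, p_dim)
        self.char_emb = nn.Embedding(n_chars, c_dim)
        self.char_lstm = nn.LSTM(c_dim, c_dim, bidirectional=True, batch_first=True)
        in_dim = w_dim + p_dim + 2 * c_dim
        self.sent_lstm = nn.LSTM(in_dim, hidden,
                                 bidirectional=(mode == "bi"), batch_first=True)

    def char_repr(self, char_ids):
        # char_ids: (word_len,) -> concat of final forward/backward states
        _, (h_n, _) = self.char_lstm(self.char_emb(char_ids).unsqueeze(0))
        return torch.cat([h_n[0, 0], h_n[1, 0]], dim=-1)

    def forward(self, word_ids, pos_ids, char_ids_per_word):
        # word_ids, pos_ids: (sent_len,); char_ids_per_word: list of (word_len,)
        chars = torch.stack([self.char_repr(c) for c in char_ids_per_word])
        x = torch.cat([self.word_emb(word_ids), self.pos_emb(pos_ids), chars], dim=-1)
        if self.mode == "bw":
            # backward LSTM: reverse the sentence, run, reverse the outputs back
            out, _ = self.sent_lstm(torch.flip(x, dims=[0]).unsqueeze(0))
            return torch.flip(out.squeeze(0), dims=[0])
        out, _ = self.sent_lstm(x.unsqueeze(0))
        return out.squeeze(0)  # (sent_len, hidden) or (sent_len, 2*hidden)


if __name__ == "__main__":
    fx = FeatureExtractor(n_words=1000, n_pos=18, n_chars=100, mode="bi")
    words = torch.tensor([5, 42, 7])
    pos = torch.tensor([3, 1, 2])
    chars = [torch.tensor([4, 8]), torch.tensor([1, 2, 3]), torch.tensor([9])]
    print(fx(words, pos, chars).shape)  # torch.Size([3, 250])
```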
GEM-SciDuet-train-107#paper-1284#slide-11
1284
Recursive Subtree Composition in LSTM-Based Dependency Parsing
The need for tree structure modelling on top of sequence modelling is an open issue in neural dependency parsing. We investigate the impact of adding a tree layer on top of a sequential model by recursively composing subtree representations (composition) in a transition-based parser that uses features extracted by a BiLSTM. Composition seems superfluous with such a model, suggesting that BiLSTMs capture information about subtrees. We perform model ablations to tease out the conditions under which composition helps. When ablating the backward LSTM, performance drops and composition does not recover much of the gap. When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information. We take the backward LSTM to be related to lookahead features and the forward LSTM to the rich history-based features both crucial for transition-based parsers. To capture history-based information, composition is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM. We correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203 ], "paper_content_text": [ "Introduction Recursive neural networks allow us to construct vector representations of trees or subtrees.", "They have been used for constituency parsing by Socher et al.", "(2013) and Dyer et al.", "(2016) and for dependency parsing by Stenetorp (2013) and , among others.", "In particular, showed that composing representations of subtrees using recursive neural networks can be beneficial for transition-based dependency parsing.", "These results were further strengthened in Kuncoro et al.", "(2017) who showed, using ablation experiments, that composition is key in the Recurrent Neural Network Grammar (RNNG) generative parser by Dyer et al.", "(2016) .", "In a parallel development, Kiperwasser and Goldberg (2016b) showed that using BiLSTMs for feature extraction can lead to high parsing accuracy even with fairly simple parsing architectures, and using BiLSTMs for feature extraction has therefore become very popular in dependency parsing.", "It is used in the state-of-the-art parser of Dozat and Manning (2017) , was used in 8 of the 10 highest performing systems of the 2017 CoNLL shared task (Zeman et al., 2017) and 10 out of the 10 highest performing systems of the 2018 CoNLL shared task (Zeman et al., 2018) .", "This raises the question of whether features extracted with BiLSTMs in themselves capture information about subtrees, thus making recursive composition superfluous.", "Some support for this hypothesis comes from the results of Linzen et al.", "(2016) which indicate that LSTMs can capture hierarchical information: they can be trained to predict long-distance number agreement in English.", "Those results were extended to more constructions and three additional languages by Gulordava et al.", "(2018) .", "However, Kuncoro et al.", "(2018) have also shown that although sequential LSTMs can learn syntactic information, a recursive neural network which explicitly models hierarchy (the RNNG model from ) is better at this: it performs better on the number agreement task from Linzen et al.", "(2016) .", "To further explore this question in the context of dependency parsing, we investigate the use of recursive composition (henceforth referred to as composition) in a parser with an architecture like the one in Kiperwasser and Goldberg (2016b) .", "This allows us to explore variations of features and isolate the conditions under which composi-tion is helpful.", "We hypothesise that the use of a BiLSTM for feature extraction makes it possible to capture information about subtrees and therefore makes the use of subtree composition superfluous.", "We 
hypothesise that composition becomes useful when part of the BiLSTM is ablated, the forward or the backward LSTM.", "We further hypothesise that composition is most useful when the parser has no access to information about the function of words in the context of the sentence given by POS tags.", "When using POS tags, the tagger has indeed had access to the full sentence.", "We additionally look at what happens when we ablate character vectors which have been shown to capture information which is partially overlapping with information from POS tags.", "We experiment with a wider variety of languages than in order to explore whether the usefulness of different model variants vary depending on language type.", "K&G Transition-Based Parsing We define the parsing architecture introduced by Kiperwasser and Goldberg (2016b) at a high level of abstraction and henceforth refer to it as K&G.", "A K&G parser is a greedy transition-based parser.", "1 For an input sentence of length n with words w 1 , .", ".", ".", ", w n , a sequence of vectors x 1:n is created, where the vector x i is a vector representation of the word w i .", "We refer to these as type vectors, as they are the same for all occurrences of a word type.", "Type vectors are then passed through a feature function which learns representations of words in the context of the sentence.", "x i = e(w i ) v i = f (x 1:n , i) We refer to the vector v i as a token vector, as it is different for different tokens of the same word type.", "In Kiperwasser and Goldberg (2016b) , the feature function used is a BiLSTM.", "As is usual in transition-based parsing, parsing involves taking transitions from an initial configuration to a terminal one.", "Parser configurations are represented by a stack, a buffer and set of dependency arcs (Nivre, 2008) .", "For each configuration c, the feature extractor concatenates the token representations of core elements from the stack and buffer.", "These token vectors are passed to a classifier, typically a Multilayer Perceptron (MLP).", "The MLP scores transitions together with the arc labels for transitions that involve adding an arc.", "Both the word type vectors and the BiLSTMs are trained together with the model.", "looked at the impact of using a recursive composition function in their parser, which is also a transition-based parser but with an architecture different from K&G.", "They make use of a variant of the LSTM called a stack LSTM.", "A stack LSTM has push and pop operations which allow passing through states in a tree structure rather than sequentially.", "Stack LSTMs are used to represent the stack, the buffer, and the sequence of past parsing actions performed for a configuration.", "Composing Subtree Representations The words of the sentence are represented by vectors of the word types, together with a vector representing the word's POS tag.", "In the initial configuration, the vectors of all words are in the buffer and the stack is empty.", "The representation of the buffer is the end state of a backward LSTM over the word vectors.", "As parsing evolves, the word vectors are popped from the buffer, pushed to and popped from the stack and the representations of stack and buffer get updated.", "define a recursive composition function and compose tree representations incrementally, as dependents get attached to their head.", "The composed representation c is built by concatenating the vector h of the head with the vector of the dependent d, as well as a vector r representing the label paired with the 
direction of the arc.", "That concatenated vector is passed through an affine transformation and then through a tanh non-linear activation.", "c = tanh(W [h; d; r] + b) They create two versions of the parser.", "In the first version, when a dependent is attached to a head, the word vector of the head is replaced by a composed vector of the head and dependent.", "In the second version, they simply keep the vector of the head when attaching a dependent to a head.", "They observe that the version with composition is substantially better than the version without, by 1.3 LAS points for English (on the Penn Treebank (PTB) test set) and 2.1 for Chinese (on the Chinese Treebank (CTB) test set).", "Their parser uses POS tag information.", "POS tags help to disambiguate between different functional uses of a word and in this way give information about the use of the word in context.", "We hypothesise that the effect of using a recursive composition function is stronger when not making use of POS tags.", "Composition in a K&G Parser The parsing architectures of the stack LSTM parser (S-LSTM) and K&G are different but have some similarities.", "2 In both cases, the configuration is represented by vectors obtained by LSTMs.", "In K&G, it is represented by the token vectors of top items of the stack and the first item of the buffer.", "In the S-LSTM, it is represented by the vector representations of the entire stack, buffer and sequence of past transitions.", "Both types of parsers learn vector representations of word types which are passed to an LSTM.", "In K&G, they are passed to an LSTM in a feature extraction step that happens before parsing.", "The LSTM in this case is used to learn vectors that have information about the context of each word, a token vector.", "In the S-LSTM, word type vectors are passed to Stack LSTMs as parsing evolves.", "In this case, LSTMs are used to learn vector representations of the stack and buffer (as well as one which learns a representation of the parsing action history).", "When composition is not used in the S-LSTM, word vectors represent word types.", "When composition is used, as parsing evolves, the stack and buffer vectors get updated with information about the subtrees they contain, so that they gradually become contextualised.", "In this sense, those vectors become more like token vectors in K&G.", "More specifically, as explained in the previous section, when a dependent is attached to its head, the composition function is applied to the vectors of head and dependent and the vector of the head is replaced by this composed vector.", "We cannot apply composition on type vectors in the K&G architecture, since they are not used after the feature extraction step and hence cannot influence the representation of the configuration.", "Instead, we apply composition on the token vectors.", "We embed those composed representations in the same space as the token vectors.", "In K&G, like in the S-LSTM, we can create a composition function and compose the representation of subtrees as parsing evolves.", "We create two versions of the parser, one where word tokens are represented by their token vector.", "The other where they are represented by their token vector and the vector of their subtree c i , which is initially just a copy of the token vector (v i = f (x 1:n , i)β€’c i ).", "When a dependent word d is attached to a word h with a relation and direction r, c i is computed with the same composition function as in the S-LSTM defined in the previous section, repeated 
below.", "3 This composition function is a simple recurrent cell.", "Simple RNNs have known shortcomings which have been addressed by using LSTMs, as proposed by Hochreiter and Schmidhuber (1997) .", "A natural extension to this composition function is therefore to replace it with an LSTM cell.", "We also try this variant.", "We construct LSTMs for subtrees.", "We initialise a new LSTM for each new subtree that is formed, that is, when a dependent d is attached to a head h which does not have any dependent yet.", "Each time we attach a dependent to a head, we construct a vector which is a concatenation of h, d and r. We pass this vector to the LSTM of h. c is the output state of the LSTM after passing through that vector.", "We denote those models with +rc for the one using an ungated recurrent cell and with +lc for the one using an LSTM cell.", "c = tanh(W [h; d; r] + b) c = LSTM([h; d; r]) As results show (see Β§ 5), neither type of composition seems useful when used with the K&G parsing model, which indicates that BiLSTMs capture information about subtrees.", "To further investigate this and in order to isolate the conditions under which composition is helpful, we perform different model ablations and test the impact of recursive composition on these ablated models.", "First, we ablate parts of the BiLSTMs: we ablate either the forward or the backward LSTM.", "We therefore build parsers with 3 different feature functions f (x, i) over the word type vectors x i in the sentence x: a BiLSTM (bi) (our baseline), a backward LSTM (bw) (i.e., ablating the forward LSTM) and a forward LSTM (f w) (i.e., ablating the backward LSTM): bi(x, i) = BILSTM(x 1:n , i) bw(x, i) = LSTM(x n:1 , i) f w(x, i) = LSTM(x 1:n , i) K&G parsers with unidirectional LSTMs are, in some sense, more similar to the S-LSTM than those with a BiLSTM, since the S-LSTM only uses unidirectional LSTMs.", "We hypothesise that composition will help the parser using unidirectional LSTMs in the same way it helps an S-LSTM.", "We additionally experiment with the vector representing the word at the input of the LSTM.", "The most complex representation consists of a concatenation of an embedding of the word type e(w i ), an embedding of the (predicted) POS tag of w i , p(w i ) and a character representation of the word obtained by running a BiLSTM over the characters ch 1:m of w i (BiLSTM(ch 1:m )).", "x i = e(w i ) β€’ p(w i ) β€’ BiLSTM(ch 1:m ) Without a POS tag embedding, the word vector is a representation of the word type.", "With POS information, we have some information about the word in the context of the sentence and the tagger has had access to the full sentence.", "The representation of the word at the input of the BiLSTM is therefore more contextualised and it can be expected that a recursive composition function will be less helpful than when POS information is not used.", "Character information has been shown to be useful for dependency parsing first by .", "and Smith et al.", "(2018b) among others have shown that POS and character information are somewhat complementary.", "used similar character vectors in the S-LSTM parser but did not look at the impact of composition when using these vectors.", "Here, we experiment with ablating either or both of the character and POS vectors.", "We look at the impact of using composition on the full model as well as these ablated models.", "We hypothesise that composition is most helpful when those vectors are not used, since they give information about the functional use of the 
word in context.", "Parser We use UUParser, a variant of the K&G transition-based parser that employs the arc-hybrid transition system from Kuhlmann et al.", "(2011) extended with a SWAP transition and a Static-Dynamic oracle, as described in de Lhoneux et al.", "(2017b) 4 .", "The SWAP transition is used to allow the construction of non-projective dependency trees (Nivre, 2009) .", "We use default hyperparameters.", "When using POS tags, we use the universal POS tags from the UD treebanks which are coarsegrained and consistent across languages.", "Those POS tags are predicted by UDPipe (Straka et al., 2016) both for training and parsing.", "This parser obtained the 7th best LAS score on average in the 2018 CoNLL shared task (Zeman et al., 2018) , about 2.5 LAS points below the best system, which uses an ensemble system as well as ELMo embeddings, as introduced by Peters et al.", "(2018) .", "Note, however, that we use a slightly impoverished version of the model used for the shared task which is described in Smith et al.", "(2018a) : we use a less accurate POS tagger (UDPipe) and we do not make use of multi-treebank models.", "In addition, Smith et al.", "(2018a) use the three top items of the stack as well as the first item of the buffer to represent the configuration, while we only use the two top items of the stack and the first item of the buffer.", "Smith et al.", "(2018a) also use an extended feature set as introduced by Kiperwasser and Goldberg (2016b) where they also use the rightmost and leftmost children of the items of the stack and buffer that they consider.", "We do not use that extended feature set.", "This is to keep the parser settings as simple as possible and avoid adding confounding factors.", "It is still a near-SOTA model.", "We evaluate parsing models on the development sets and report the average of the 5 best results in 30 epochs and 5 runs with different random seeds.", "Data We test our models on a sample of treebanks from Universal Dependencies v2.1 (Nivre et al., 2017) .", "We follow the criteria from de Lhoneux et al.", "(2017c) to select our sample: we ensure typological variety, we ensure variety of domains, we verify the quality of the treebanks, and we use one treebank with a large amount of non-projective arcs.", "However, unlike them, we do not use extremely small treebanks.", "Our selection is the same as theirs but we remove the tiny treebanks and replace them with 3 others.", "Our final set is: Ancient Greek (PROIEL), Basque, Chinese, Czech, English, Finnish, French, Hebrew and Japanese.", "Results First, we look at the effect of our different recursive composition functions on the full model (i.e., the model using a BiLSTM for feature extraction as well as both character and POS tag information).", "As can be seen from Figure 1 , recursive composition using an LSTM cell (+lc) is generally better than recursive composition with a recurrent cell (+rc), but neither technique reliably improves the accuracy of a BiLSTM parser.", "Ablating the forward and backward LSTMs Second, we only consider the models using character and POS information and look at the effect of ablating parts of the BiLSTM on the different languages.", "The results can be seen in Figure 2 .", "As expected, the BiLSTM parser performs considerably better than both unidirectional LSTM parsers, and the backward LSTM is considerably better than the forward LSTM, on average.", "It is, however, interesting to note that using a forward LSTM is much more hurtful for some languages than others: 
it is especially hurtful for Chinese and Japanese.", "This can be explained by language properties: the right-headed languages suffer more from ablating the backward LSTM than other languages.", "We observe a correlation between how hurtful a forward model is compared to the baseline and the percentage of right-headed content dependency relations (R = βˆ’0.838, p < .01), see Figure 3 .", "5 There is no significant correlation between how hurtful ablating the forward LSTM is and the percentage of left-headed content dependency relations (p > .05) indicating that its usefulness is not dependent on language properties.", "We hypothesise that dependency length or sentence length can play a role but we also find no correlation between how hurtful it is to ablate the forward LSTM and average dependency or sentence length in treebanks.", "It is finally also interesting to note that the backward LSTM performance is close to the BiLSTMs performance for some languages (Japanese and French).", "5 The reason we only consider content dependency relations is that the UD scheme focuses on dependency relations between content words and treats function words as features of content words to maximise parallelism across languages (de Marneffe et al., 2014) .", "We now look at the effect of using recursive composition on these ablated models.", "Results are given in Figure 4 .", "First of all, we observe unsurprisingly that composition using an LSTM cell is much better than using a simple recurrent cell.", "Second, both types of composition help the backward LSTM case, but neither reliably helps the bi models.", "Finally, the recurrent cell does not help the forward LSTM case but the LSTM cell does to some extent.", "It is interesting to note that using composition, especially using an LSTM cell, bridges a substantial part of the gap between the bw and the bi models.", "These results can be related to the literature on transition-based dependency parsing.", "Transitionbased parsers generally rely on two types of features: history-based features over the emerging dependency tree and lookahead features over the buffer of remaining input.", "The former are based on a hierarchical structure, the latter are purely sequential.", "McDonald and Nivre (2007) and Mc-Donald and Nivre (2011) have shown that historybased features enhance transition-based parsers as long as they do not suffer from error propagation.", "However, Nivre (2006) has also shown that lookahead features are absolutely crucial given the greedy left-to-right parsing strategy.", "In the model architectures considered here, the backward LSTM provides an improved lookahead.", "Similarly to the lookahead in statistical parsing, it is sequential.", "The difference is that it gives information about upcoming words with unbounded length.", "The forward LSTM in this model architecture provides history-based information but unlike in statistical parsing, that information is built sequentially rather than hierarchically: the forward LSTM passes through the sentence in the linear order of the sentence.", "In our results, we see that lookahead features are more important than the history-based ones.", "It hurts parsing accuracy more to ablate the backward LSTM than to ablate the forward one.", "This is expected given that some history-based information is still available through the top tokens on the stack, while the lookahead information is almost lost completely without the backward LSTM.", "A composition function gives hierarchical information about the 
history of parsing actions.", "It makes sense that it helps the backward LSTM model most since that model has no access to any information about parsing history.", "It helps the forward LSTM slightly which indicates that there can be gains from using structured information about parsing history rather than sequential information.", "We could then expect that composition should help the BiLSTM model which, how- Figure 5 : LAS of baseline, using char and/or POS tags to construct word representations ever, is not the case.", "This might be because the BiLSTM constructs information about parsing history and lookahead into a unique representation.", "In any case, this indicates that BiLSTMs are powerful feature extractors which seem to capture useful information about subtrees.", "Ablating POS and character information Next, we look at the effect of the different word representation methods on the different languages, as represented in Figure 5 .", "As is consistent with the literature de Lhoneux et al., 2017a; Smith et al., 2018b) , using characterbased word representations and/or POS tags consistently improves parsing accuracy but has a different impact in different languages and the benefits of both methods are not cumulative: using the two combined is not much better than using either on its own.", "In particular, character models are an efficient way to obtain large improvements in morphologically rich languages.", "We look at the impact of recursive compositions on all combinations of ablated models, see Table 1 .", "We only look at the impact of using an LSTM cell rather than a recurrent cell since it was a better technique across the board (see previous section).", "Looking first at BiLSTMs, it seems that composition does not reliably help parsing accuracy, regardless of access to POS and character information.", "This indicates that the vectors obtained from the BiLSTM already contain information that would otherwise be obtained by using composition.", "Turning to results with either the forward or the backward LSTM ablated, we see the expected pattern.", "Composition helps more when the model lacks POS tags, indicating that there is some redundancy between these two methods of building contextual information.", "Composition helps recover a substantial part of the gap of the model with a backward LSTM with or without POS tag.", "It recovers a much less substantial part of the gap in other cases which means that, although there is some redundancy between these different methods of building contextual information, they are still complementary and a recursive composition function cannot fully compensate for the lack of a backward LSTM or POS and/or character information.", "There are some language idiosyncracies in the results.", "While composition helps recover most of the gap for the backward LSTM models without POS and/or character information for Czech and English, it does it to a much smaller extent for Basque and Finnish.", "We hypothesise that arc depth might impact the usefulness of composition, since more depth means more matrix multiplications with the composition function.", "However, we find no correlation between average arc depth of the treebanks and usefulness of composition.", "It is an open question why composition helps some languages more than others.", "Note that we are not the first to use composition over vectors obtained from a BiLSTM in the context of dependency parsing, as this was done by Qi and Manning (2017) .", "The difference is that they compose vectors 
before scoring transitions.", "It was also done by Kiperwasser and Goldberg (2016a) who showed that using BiLSTM vectors for words in their Tree LSTM parser is helpful but they did not compare this to using BiLSTM vectors without the Tree LSTM.", "Recurrent and recursive LSTMs in the way they have been considered in this paper are two ways of constructing contextual information and making it available for local decisions in a greedy parser.", "The strength of recursive LSTMs is that they can build this contextual information using hierarchical context rather than linear context.", "A possible weakness is that this makes the model sensitive to error propagation: a wrong attachment leads to using the wrong contextual information.", "It is therefore possible that the benefits and drawbacks of using this method cancel each other out in the context of BiLSTMs.", "Ensemble To investigate further the information captured by BiLSTMs, we ensemble the 6 versions of the models with POS and character information with the different feature extractors (bi, bw, f w) with (+lc) and without composition.", "We use the (unweighted) reparsing technique of Sagae and Lavie (2006) 6 and ignoring labels.", "As can be seen from the UAS scores in Table 2 , the ensemble (full) largely outperforms the parser using only a BiLSTM, indicating that the information obtained from the different models is complementary.", "To investigate the contribution of each of the 6 models, we ablate each one by one.", "As can be seen from Table 2 , ablating either of the BiLSTM models or the backward LSTM using composition, results in the least effective of the ablated models, further strengthening the conclusion that BiL-STMs are powerful feature extractors.", "Conclusion We investigated the impact of composing the representation of subtrees in a transition-based parser.", "We observed that composition does not reliably help a parser that uses a BiLSTM for feature extraction, indicating that vectors obtained from the BiLSTM might capture subtree information, which is consistent with the results of Linzen et al.", "(2016) .", "However, we observe that, when ablating the backward LSTM, performance drops and recursive composition does not help to recover much of this gap.", "We hypothesise that this is because the backward LSTM primarily improves the lookahead for the greedy parser.", "When ablating the forward LSTM, performance drops to a smaller extent and recursive composition recovers a substantial part of the gap.", "This indicates that a forward LSTM and a recursive composition function capture similar information, which we take to be related to the rich history-based features crucial for a transition-based parser.", "To capture this infor-mation, a recursive composition function is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM.", "We further find that recursive composition helps more when POS tags are ablated from the model, indicating that POS tags and a recursive composition function are partly redundant ways of constructing contextual information.", "Finally, we correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages." ] }
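The ensemble experiment described above combines six parsers with unweighted reparsing over unlabeled arcs (Sagae and Lavie, 2006). The sketch below shows only the vote-counting step with a per-token majority; the actual technique decodes a maximum spanning tree over the vote matrix, which guarantees a well-formed tree. The predictions in the example are made up for illustration.

```python
# Simplified sketch of unweighted ensemble voting over predicted heads
# (Sagae & Lavie 2006 style, ignoring labels). For brevity this takes a
# per-token majority vote; the real reparsing step runs MST decoding over
# the vote matrix so the result is guaranteed to be a tree.
from collections import Counter
from typing import List


def vote_heads(predictions: List[List[int]]) -> List[int]:
    """predictions[p][i] = head index chosen by parser p for token i (0 = root)."""
    n_tokens = len(predictions[0])
    voted = []
    for i in range(n_tokens):
        counts = Counter(pred[i] for pred in predictions)
        voted.append(counts.most_common(1)[0][0])
    return voted


if __name__ == "__main__":
    # Three hypothetical parsers' head predictions for a 4-token sentence.
    parsers = [
        [2, 0, 2, 3],
        [2, 0, 2, 2],
        [3, 0, 2, 3],
    ]
    print(vote_heads(parsers))  # [2, 0, 2, 3]
```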
{ "paper_header_number": [ "1", "2", "3", "4", "5", "5.1", "5.2", "5.3", "6" ], "paper_header_content": [ "Introduction", "K&G Transition-Based Parsing", "Composing Subtree Representations", "Composition in a K&G Parser", "Results", "Ablating the forward and backward LSTMs", "Ablating POS and character information", "Ensemble", "Conclusion" ] }
GEM-SciDuet-train-107#paper-1284#slide-11
Conclusion
Subtree composition does not reliably help a BiLSTM transition-based parser. The backward part of the BiLSTM is crucial, especially for right-headed languages. The forward part of the BiLSTM is less crucial. A backward LSTM + subtree composition performs close to a BiLSTM. POS information and subtree composition are two partially redundant ways of constructing contextual information.
Subtree composition does not reliably help a BiLSTM transition-based parser. The backward part of the BiLSTM is crucial, especially for right-headed languages. The forward part of the BiLSTM is less crucial. A backward LSTM + subtree composition performs close to a BiLSTM. POS information and subtree composition are two partially redundant ways of constructing contextual information.
[]
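The analysis above reports a Pearson correlation (R = -0.838, p < .01) between how much ablating the backward LSTM hurts a language and that treebank's share of right-headed content relations. A small sketch of this style of analysis follows, assuming SciPy; the per-language numbers are placeholders, not the paper's results.

```python
# Sketch of the correlation analysis mentioned above: Pearson's r between
# the per-language LAS change of the forward-only model (relative to the
# BiLSTM baseline) and the percentage of right-headed content relations.
# All values below are placeholders for illustration, not reported data.
from scipy.stats import pearsonr

# fw LAS minus bi LAS per language (more negative = more hurtful), illustrative
delta_fw = [-2.1, -3.4, -12.9, -2.6, -2.8, -3.1, -2.3, -3.6, -13.5]
# percentage of right-headed content dependency relations, illustrative
right_headed_pct = [34.0, 41.0, 78.0, 36.0, 39.0, 43.0, 31.0, 45.0, 86.0]

r, p = pearsonr(delta_fw, right_headed_pct)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")
```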